00:00:00.001 Started by upstream project "autotest-per-patch" build number 132719
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.100 The recommended git tool is: git
00:00:00.100 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.145 Fetching changes from the remote Git repository
00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.195 Using shallow fetch with depth 1
00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.196 > git --version # timeout=10
00:00:00.231 > git --version # 'git version 2.39.2'
00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.272 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.283 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.294 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.294 > git config core.sparsecheckout # timeout=10
00:00:05.304 > git read-tree -mu HEAD # timeout=10
00:00:05.319 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.343 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.343 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.443 [Pipeline] Start of Pipeline
00:00:05.455 [Pipeline] library
00:00:05.456 Loading library shm_lib@master
00:00:05.456 Library shm_lib@master is cached. Copying from home.
00:00:05.473 [Pipeline] node
00:00:05.481 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.482 [Pipeline] {
00:00:05.491 [Pipeline] catchError
00:00:05.493 [Pipeline] {
00:00:05.504 [Pipeline] wrap
00:00:05.513 [Pipeline] {
00:00:05.521 [Pipeline] stage
00:00:05.523 [Pipeline] { (Prologue)
00:00:05.723 [Pipeline] sh
00:00:06.006 + logger -p user.info -t JENKINS-CI
00:00:06.028 [Pipeline] echo
00:00:06.030 Node: WFP16
00:00:06.040 [Pipeline] sh
00:00:06.342 [Pipeline] setCustomBuildProperty
00:00:06.356 [Pipeline] echo
00:00:06.358 Cleanup processes
00:00:06.363 [Pipeline] sh
00:00:06.647 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.647 1426242 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.657 [Pipeline] sh
00:00:06.943 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.943 ++ grep -v 'sudo pgrep'
00:00:06.943 ++ awk '{print $1}'
00:00:06.943 + sudo kill -9
00:00:06.943 + true
00:00:06.958 [Pipeline] cleanWs
00:00:06.967 [WS-CLEANUP] Deleting project workspace...
00:00:06.967 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.973 [WS-CLEANUP] done
00:00:06.977 [Pipeline] setCustomBuildProperty
00:00:06.992 [Pipeline] sh
00:00:07.285 + sudo git config --global --replace-all safe.directory '*'
00:00:07.384 [Pipeline] httpRequest
00:00:08.033 [Pipeline] echo
00:00:08.035 Sorcerer 10.211.164.101 is alive
00:00:08.043 [Pipeline] retry
00:00:08.045 [Pipeline] {
00:00:08.059 [Pipeline] httpRequest
00:00:08.062 HttpMethod: GET
00:00:08.063 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.063 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.084 Response Code: HTTP/1.1 200 OK
00:00:08.084 Success: Status code 200 is in the accepted range: 200,404
00:00:08.084 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.283 [Pipeline] }
00:00:26.305 [Pipeline] // retry
00:00:26.314 [Pipeline] sh
00:00:26.606 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.624 [Pipeline] httpRequest
00:00:27.049 [Pipeline] echo
00:00:27.052 Sorcerer 10.211.164.101 is alive
00:00:27.062 [Pipeline] retry
00:00:27.065 [Pipeline] {
00:00:27.081 [Pipeline] httpRequest
00:00:27.086 HttpMethod: GET
00:00:27.087 URL: http://10.211.164.101/packages/spdk_50b04b06b3a5c61f7bac0be3559359649afb341e.tar.gz
00:00:27.088 Sending request to url: http://10.211.164.101/packages/spdk_50b04b06b3a5c61f7bac0be3559359649afb341e.tar.gz
00:00:27.107 Response Code: HTTP/1.1 200 OK
00:00:27.107 Success: Status code 200 is in the accepted range: 200,404
00:00:27.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_50b04b06b3a5c61f7bac0be3559359649afb341e.tar.gz
00:01:15.948 [Pipeline] }
00:01:15.966 [Pipeline] // retry
00:01:15.975 [Pipeline] sh
00:01:16.268 + tar --no-same-owner -xf spdk_50b04b06b3a5c61f7bac0be3559359649afb341e.tar.gz
00:01:18.829 [Pipeline] sh
00:01:19.117 + git -C spdk log --oneline -n5
00:01:19.117 50b04b06b bdev/compress: Simplify split logic for unmap operation
00:01:19.117 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:19.117 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:19.117 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:19.117 e2dfdf06c accel/mlx5: Register post_poller handler
00:01:19.129 [Pipeline] }
00:01:19.142 [Pipeline] // stage
00:01:19.153 [Pipeline] stage
00:01:19.155 [Pipeline] { (Prepare)
00:01:19.199 [Pipeline] writeFile
00:01:19.216 [Pipeline] sh
00:01:19.501 + logger -p user.info -t JENKINS-CI
00:01:19.512 [Pipeline] sh
00:01:19.796 + logger -p user.info -t JENKINS-CI
00:01:19.807 [Pipeline] sh
00:01:20.092 + cat autorun-spdk.conf
00:01:20.092 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.092 SPDK_TEST_NVMF=1
00:01:20.092 SPDK_TEST_NVME_CLI=1
00:01:20.092 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.092 SPDK_TEST_NVMF_NICS=e810
00:01:20.092 SPDK_TEST_VFIOUSER=1
00:01:20.092 SPDK_RUN_UBSAN=1
00:01:20.092 NET_TYPE=phy
00:01:20.100 RUN_NIGHTLY=0
00:01:20.104 [Pipeline] readFile
00:01:20.128 [Pipeline] withEnv
00:01:20.130 [Pipeline] {
00:01:20.140 [Pipeline] sh
00:01:20.421 + set -ex
00:01:20.421 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:20.421 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:20.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.421 ++ SPDK_TEST_NVMF=1
00:01:20.421 ++ SPDK_TEST_NVME_CLI=1
00:01:20.421 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.421 ++ SPDK_TEST_NVMF_NICS=e810
00:01:20.421 ++ SPDK_TEST_VFIOUSER=1
00:01:20.421 ++ SPDK_RUN_UBSAN=1
00:01:20.421 ++ NET_TYPE=phy
00:01:20.421 ++ RUN_NIGHTLY=0
00:01:20.421 + case $SPDK_TEST_NVMF_NICS in
00:01:20.421 + DRIVERS=ice
00:01:20.421 + [[ tcp == \r\d\m\a ]]
00:01:20.421 + [[ -n ice ]]
00:01:20.421 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:20.421 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:20.421 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:20.421 rmmod: ERROR: Module irdma is not currently loaded
00:01:20.421 rmmod: ERROR: Module i40iw is not currently loaded
00:01:20.421 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:20.421 + true
00:01:20.421 + for D in $DRIVERS
00:01:20.421 + sudo modprobe ice
00:01:20.421 + exit 0
00:01:20.429 [Pipeline] }
00:01:20.442 [Pipeline] // withEnv
00:01:20.447 [Pipeline] }
00:01:20.460 [Pipeline] // stage
00:01:20.469 [Pipeline] catchError
00:01:20.470 [Pipeline] {
00:01:20.484 [Pipeline] timeout
00:01:20.484 Timeout set to expire in 1 hr 0 min
00:01:20.486 [Pipeline] {
00:01:20.500 [Pipeline] stage
00:01:20.503 [Pipeline] { (Tests)
00:01:20.519 [Pipeline] sh
00:01:20.804 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.804 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.804 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.804 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:20.804 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.804 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.804 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:20.804 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.804 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.804 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.804 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:20.804 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.804 + source /etc/os-release
00:01:20.804 ++ NAME='Fedora Linux'
00:01:20.804 ++ VERSION='39 (Cloud Edition)'
00:01:20.804 ++ ID=fedora
00:01:20.804 ++ VERSION_ID=39
00:01:20.804 ++ VERSION_CODENAME=
00:01:20.804 ++ PLATFORM_ID=platform:f39
00:01:20.804 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.804 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.804 ++ LOGO=fedora-logo-icon
00:01:20.804 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.804 ++ HOME_URL=https://fedoraproject.org/
00:01:20.804 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.804 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.804 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.804 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.804 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.804 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.804 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.804 ++ SUPPORT_END=2024-11-12
00:01:20.804 ++ VARIANT='Cloud Edition'
00:01:20.804 ++ VARIANT_ID=cloud
00:01:20.804 + uname -a
00:01:20.804 Linux spdk-wfp-16 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.804 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:23.341 Hugepages
00:01:23.341 node hugesize free / total
00:01:23.341 node0 1048576kB 0 / 0
00:01:23.341 node0 2048kB 0 / 0
00:01:23.341 node1 1048576kB 0 / 0
00:01:23.341 node1 2048kB 0 / 0
00:01:23.341 
00:01:23.341 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.341 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:23.341 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:23.341 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:23.341 + rm -f /tmp/spdk-ld-path
00:01:23.341 + source autorun-spdk.conf
00:01:23.341 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.341 ++ SPDK_TEST_NVMF=1
00:01:23.341 ++ SPDK_TEST_NVME_CLI=1
00:01:23.341 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.341 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.341 ++ SPDK_TEST_VFIOUSER=1
00:01:23.341 ++ SPDK_RUN_UBSAN=1
00:01:23.341 ++ NET_TYPE=phy
00:01:23.341 ++ RUN_NIGHTLY=0
00:01:23.341 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.341 + [[ -n '' ]]
00:01:23.341 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.341 + for M in /var/spdk/build-*-manifest.txt
00:01:23.341 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:23.341 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.341 + for M in /var/spdk/build-*-manifest.txt
00:01:23.341 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.341 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.341 + for M in /var/spdk/build-*-manifest.txt
00:01:23.341 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.341 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.341 ++ uname
00:01:23.341 + [[ Linux == \L\i\n\u\x ]]
00:01:23.341 + sudo dmesg -T
00:01:23.600 + sudo dmesg --clear
00:01:23.600 + dmesg_pid=1427166
00:01:23.600 + [[ Fedora Linux == FreeBSD ]]
00:01:23.600 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.600 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.600 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.600 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.600 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.600 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.600 + sudo dmesg -Tw
00:01:23.600 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.600 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.600 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.600 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.600 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.600 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.600 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.600 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.600 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.600 11:02:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.600 11:02:56 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:23.600 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:23.601 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:23.601 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:23.601 11:02:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:23.601 11:02:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:23.601 11:02:56 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.601 11:02:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.601 11:02:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:23.601 11:02:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:23.601 11:02:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:23.601 11:02:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:23.601 11:02:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:23.601 11:02:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.601 11:02:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.601 11:02:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.601 11:02:56 -- paths/export.sh@5 -- $ export PATH
00:01:23.601 11:02:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.601 11:02:56 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:23.601 11:02:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:23.601 11:02:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733479376.XXXXXX
00:01:23.601 11:02:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733479376.XCdzkl
00:01:23.601 11:02:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:23.601 11:02:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:23.601 11:02:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:23.601 11:02:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:23.601 11:02:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.601 11:02:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:23.601 11:02:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:23.601 11:02:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.601 11:02:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:23.601 11:02:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:23.601 11:02:56 -- pm/common@17 -- $ local monitor
00:01:23.601 11:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.601 11:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.601 11:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.601 11:02:56 -- pm/common@21 -- $ date +%s
00:01:23.601 11:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.601 11:02:56 -- pm/common@21 -- $ date +%s
00:01:23.601 11:02:56 -- pm/common@25 -- $ sleep 1
00:01:23.601 11:02:56 -- pm/common@21 -- $ date +%s
00:01:23.601 11:02:56 -- pm/common@21 -- $ date +%s
00:01:23.601 11:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479376
00:01:23.601 11:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479376
00:01:23.601 11:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479376
00:01:23.601 11:02:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479376
00:01:23.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479376_collect-cpu-load.pm.log
00:01:23.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479376_collect-vmstat.pm.log
00:01:23.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479376_collect-cpu-temp.pm.log
00:01:23.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479376_collect-bmc-pm.bmc.pm.log
00:01:24.802 11:02:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:24.802 11:02:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:24.802 11:02:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:24.802 11:02:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:24.802 11:02:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.802 Fri Dec 6 10:02:57 AM UTC 2024
00:01:24.802 11:02:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.802 v25.01-pre-304-g50b04b06b
00:01:24.802 11:02:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:24.802 11:02:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.802 11:02:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.802 11:02:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.802 11:02:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.802 11:02:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.802 ************************************
00:01:24.802 START TEST ubsan
00:01:24.802 ************************************
00:01:24.802 11:02:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:24.802 using ubsan
00:01:24.802 
00:01:24.802 real	0m0.000s
00:01:24.802 user	0m0.000s
00:01:24.802 sys	0m0.000s
00:01:24.802 11:02:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.802 11:02:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.802 ************************************
00:01:24.802 END TEST ubsan
00:01:24.802 ************************************
00:01:24.802 11:02:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.802 11:02:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.802 11:02:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.802 11:02:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:25.062 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:25.062 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:25.322 Using 'verbs' RDMA provider
00:01:38.111 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:50.329 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:50.329 Creating mk/config.mk...done.
00:01:50.329 Creating mk/cc.flags.mk...done.
00:01:50.329 Type 'make' to build.
00:01:50.329 11:03:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:01:50.329 11:03:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:50.329 11:03:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:50.329 11:03:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.329 ************************************
00:01:50.329 START TEST make
00:01:50.329 ************************************
00:01:50.329 11:03:22 make -- common/autotest_common.sh@1129 -- $ make -j112
00:01:50.329 make[1]: Nothing to be done for 'all'.
00:01:51.714 The Meson build system
00:01:51.714 Version: 1.5.0
00:01:51.714 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:51.714 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:51.714 Build type: native build
00:01:51.714 Project name: libvfio-user
00:01:51.714 Project version: 0.0.1
00:01:51.714 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:51.714 C linker for the host machine: cc ld.bfd 2.40-14
00:01:51.714 Host machine cpu family: x86_64
00:01:51.714 Host machine cpu: x86_64
00:01:51.714 Run-time dependency threads found: YES
00:01:51.714 Library dl found: YES
00:01:51.714 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:51.714 Run-time dependency json-c found: YES 0.17
00:01:51.714 Run-time dependency cmocka found: YES 1.1.7
00:01:51.714 Program pytest-3 found: NO
00:01:51.714 Program flake8 found: NO
00:01:51.714 Program misspell-fixer found: NO
00:01:51.714 Program restructuredtext-lint found: NO
00:01:51.714 Program valgrind found: YES (/usr/bin/valgrind)
00:01:51.714 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:51.714 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:51.714 Compiler for C supports arguments -Wwrite-strings: YES
00:01:51.714 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:51.714 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:51.714 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:51.714 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:51.714 Build targets in project: 8
00:01:51.714 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:51.714 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:51.714 
00:01:51.714 libvfio-user 0.0.1
00:01:51.714 
00:01:51.714 User defined options
00:01:51.714 buildtype : debug
00:01:51.714 default_library: shared
00:01:51.714 libdir : /usr/local/lib
00:01:51.714 
00:01:51.714 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:52.281 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:52.282 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:52.282 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:52.282 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:52.282 [4/37] Compiling C object samples/null.p/null.c.o
00:01:52.282 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:52.282 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:52.282 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:52.282 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:52.282 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:52.282 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:52.282 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:52.282 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:52.282 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:52.282 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:52.282 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:52.282 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:52.282 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:52.282 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:52.282 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:52.282 [20/37] Compiling C object samples/server.p/server.c.o
00:01:52.282 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:52.282 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:52.282 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:52.282 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:52.282 [25/37] Compiling C object samples/client.p/client.c.o
00:01:52.282 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:52.282 [27/37] Linking target samples/client
00:01:52.282 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:52.282 [29/37] Linking target test/unit_tests
00:01:52.540 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:52.540 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:52.540 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:52.540 [33/37] Linking target samples/gpio-pci-idio-16
00:01:52.540 [34/37] Linking target samples/null
00:01:52.540 [35/37] Linking target samples/server
00:01:52.540 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:52.540 [37/37] Linking target samples/lspci
00:01:52.540 INFO: autodetecting backend as ninja
00:01:52.540 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:52.799 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.090 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:53.090 ninja: no work to do.
00:01:58.401 The Meson build system
00:01:58.401 Version: 1.5.0
00:01:58.401 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:58.401 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:58.401 Build type: native build
00:01:58.401 Program cat found: YES (/usr/bin/cat)
00:01:58.401 Project name: DPDK
00:01:58.401 Project version: 24.03.0
00:01:58.401 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:58.401 C linker for the host machine: cc ld.bfd 2.40-14
00:01:58.401 Host machine cpu family: x86_64
00:01:58.401 Host machine cpu: x86_64
00:01:58.401 Message: ## Building in Developer Mode ##
00:01:58.401 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:58.401 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:58.401 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:58.401 Program python3 found: YES (/usr/bin/python3)
00:01:58.401 Program cat found: YES (/usr/bin/cat)
00:01:58.401 Compiler for C supports arguments -march=native: YES
00:01:58.401 Checking for size of "void *" : 8
00:01:58.401 Checking for size of "void *" : 8 (cached)
00:01:58.401 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:58.401 Library m found: YES
00:01:58.401 Library numa found: YES
00:01:58.401 Has header "numaif.h" : YES
00:01:58.401 Library fdt found: NO
00:01:58.401 Library execinfo found: NO
00:01:58.401 Has header "execinfo.h" : YES
00:01:58.401 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:58.401 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:58.401 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:58.401 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:58.401 Run-time dependency openssl found: YES 3.1.1
00:01:58.401 Run-time dependency libpcap found: YES 1.10.4
00:01:58.401 Has header "pcap.h" with dependency libpcap: YES
00:01:58.401 Compiler for C supports arguments -Wcast-qual: YES
00:01:58.401 Compiler for C supports arguments -Wdeprecated: YES
00:01:58.401 Compiler for C supports arguments -Wformat: YES
00:01:58.401 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:58.401 Compiler for C supports arguments -Wformat-security: NO
00:01:58.401 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:58.401 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:58.401 Compiler for C supports arguments -Wnested-externs: YES
00:01:58.401 Compiler for C supports arguments -Wold-style-definition: YES
00:01:58.401 Compiler for C supports arguments -Wpointer-arith: YES
00:01:58.401 Compiler for C supports arguments -Wsign-compare: YES
00:01:58.401 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:58.401 Compiler for C supports arguments -Wundef: YES
00:01:58.401 Compiler for C supports arguments -Wwrite-strings: YES
00:01:58.401 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:58.401 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:58.401 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:58.401 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:58.401 Program objdump found: YES (/usr/bin/objdump)
00:01:58.401 Compiler for C supports arguments -mavx512f: YES
00:01:58.401 Checking if "AVX512 checking" compiles: YES
00:01:58.401 Fetching value of define "__SSE4_2__" : 1
00:01:58.401 Fetching value of define "__AES__" : 1
00:01:58.401 Fetching value of define "__AVX__" : 1
00:01:58.401 Fetching value of define "__AVX2__" : 1
00:01:58.401 Fetching value of define "__AVX512BW__" : 1
00:01:58.401 Fetching value of define "__AVX512CD__" : 1
00:01:58.401 Fetching value of define "__AVX512DQ__" : 1
00:01:58.401 Fetching value of define "__AVX512F__" : 1
00:01:58.401 Fetching value of define "__AVX512VL__" : 1
00:01:58.402 Fetching value of define "__PCLMUL__" : 1
00:01:58.402 Fetching value of define "__RDRND__" : 1
00:01:58.402 Fetching value of define "__RDSEED__" : 1
00:01:58.402 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:58.402 Fetching value of define "__znver1__" : (undefined)
00:01:58.402 Fetching value of define "__znver2__" : (undefined)
00:01:58.402 Fetching value of define "__znver3__" : (undefined)
00:01:58.402 Fetching value of define "__znver4__" : (undefined)
00:01:58.402 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:58.402 Message: lib/log: Defining dependency "log"
00:01:58.402 Message: lib/kvargs: Defining dependency "kvargs"
00:01:58.402 Message: lib/telemetry: Defining dependency "telemetry"
00:01:58.402 Checking for function "getentropy" : NO
00:01:58.402 Message: lib/eal: Defining dependency "eal"
00:01:58.402 Message: lib/ring: Defining dependency "ring"
00:01:58.402 Message: lib/rcu: Defining dependency "rcu"
00:01:58.402 Message: lib/mempool: Defining dependency "mempool"
00:01:58.402 Message: lib/mbuf: Defining dependency "mbuf"
00:01:58.402 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:58.402 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:58.402 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:58.402 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:58.402 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:58.402 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:58.402 Compiler for C supports arguments -mpclmul: YES
00:01:58.402 Compiler for C supports arguments -maes: YES
00:01:58.402 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:58.402 Compiler for C supports arguments -mavx512bw: YES
00:01:58.402 Compiler for C supports arguments -mavx512dq: YES
00:01:58.402 Compiler for C supports arguments -mavx512vl: YES
00:01:58.402 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:58.402 Compiler for C supports arguments -mavx2: YES
00:01:58.402 Compiler for C supports arguments -mavx: YES
00:01:58.402 Message: lib/net: Defining dependency "net"
00:01:58.402 Message: lib/meter: Defining dependency "meter"
00:01:58.402 Message: lib/ethdev: Defining dependency "ethdev"
00:01:58.402 Message: lib/pci: Defining dependency "pci"
00:01:58.402 Message: lib/cmdline: Defining dependency "cmdline"
00:01:58.402 Message: lib/hash: Defining dependency "hash"
00:01:58.402 Message: lib/timer: Defining dependency "timer"
00:01:58.402 Message: lib/compressdev: Defining dependency "compressdev"
00:01:58.402 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:58.402 Message: lib/dmadev: Defining dependency "dmadev"
00:01:58.402 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:58.402 Message: lib/power: Defining dependency "power"
00:01:58.402 Message: lib/reorder: Defining dependency "reorder"
00:01:58.402 Message: lib/security: Defining dependency "security"
00:01:58.402 Has header "linux/userfaultfd.h" : YES
00:01:58.402 Has header "linux/vduse.h" : YES
00:01:58.402 Message: lib/vhost: Defining dependency "vhost"
00:01:58.402 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:58.402 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:58.402 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:58.402 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:58.402 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:58.402 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:58.402 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:58.402 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:58.402 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:58.402 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:58.402 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:58.402 Configuring doxy-api-html.conf using configuration
00:01:58.402 Configuring doxy-api-man.conf using configuration
00:01:58.402 Program mandb found: YES (/usr/bin/mandb)
00:01:58.402 Program sphinx-build found: NO
00:01:58.402 Configuring rte_build_config.h using configuration
00:01:58.402 Message:
00:01:58.402 =================
00:01:58.402 Applications Enabled
00:01:58.402 =================
00:01:58.402
00:01:58.402 apps:
00:01:58.402
00:01:58.402
00:01:58.402 Message:
00:01:58.402 =================
00:01:58.402 Libraries Enabled
00:01:58.402 =================
00:01:58.402
00:01:58.402 libs:
00:01:58.402 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:58.402 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:58.402 cryptodev, dmadev, power, reorder, security, vhost,
00:01:58.402
00:01:58.402 Message:
00:01:58.402 ===============
00:01:58.402 Drivers Enabled
00:01:58.402 ===============
00:01:58.402
00:01:58.402 common:
00:01:58.402
00:01:58.402 bus:
00:01:58.402 pci, vdev,
00:01:58.402 mempool:
00:01:58.402 ring,
00:01:58.402 dma:
00:01:58.402
00:01:58.402 net:
00:01:58.402
00:01:58.402 crypto:
00:01:58.402
00:01:58.402 compress:
00:01:58.402
00:01:58.402 vdpa:
00:01:58.402
00:01:58.402
00:01:58.402 Message:
00:01:58.402 =================
00:01:58.402 Content Skipped
00:01:58.402 =================
00:01:58.402
00:01:58.402 apps:
00:01:58.402 dumpcap: explicitly disabled via build config
00:01:58.402 graph: explicitly disabled via build config
00:01:58.402 pdump: explicitly disabled via build config
00:01:58.402 proc-info: explicitly disabled via build config
00:01:58.402 test-acl: explicitly disabled via build config
00:01:58.402 test-bbdev: explicitly disabled via build config
00:01:58.402 test-cmdline: explicitly disabled via build config
00:01:58.402 test-compress-perf: explicitly disabled via build config
00:01:58.402 test-crypto-perf: explicitly disabled via build config
00:01:58.402 test-dma-perf: explicitly disabled via build config
00:01:58.402 test-eventdev: explicitly disabled via build config
00:01:58.402 test-fib: explicitly disabled via build config
00:01:58.402 test-flow-perf: explicitly disabled via build config
00:01:58.402 test-gpudev: explicitly disabled via build config
00:01:58.402 test-mldev: explicitly disabled via build config
00:01:58.402 test-pipeline: explicitly disabled via build config
00:01:58.402 test-pmd: explicitly disabled via build config
00:01:58.402 test-regex: explicitly disabled via build config
00:01:58.402 test-sad: explicitly disabled via build config
00:01:58.402 test-security-perf: explicitly disabled via build config
00:01:58.402
00:01:58.402 libs:
00:01:58.402 argparse: explicitly disabled via build config
00:01:58.402 metrics: explicitly disabled via build config
00:01:58.402 acl: explicitly disabled via build config
00:01:58.402 bbdev: explicitly disabled via build config
00:01:58.402 bitratestats: explicitly disabled via build config
00:01:58.402 bpf: explicitly disabled via build config
00:01:58.402 cfgfile: explicitly disabled via build config
00:01:58.402 distributor: explicitly disabled via build config
00:01:58.402 efd: explicitly disabled via build config
00:01:58.402 eventdev: explicitly disabled via build config
00:01:58.402 dispatcher: explicitly disabled via build config
00:01:58.402 gpudev: explicitly disabled via build config
00:01:58.403 gro: explicitly disabled via build config
00:01:58.403 gso: explicitly disabled via build config
00:01:58.403 ip_frag: explicitly disabled via build config
00:01:58.403 jobstats: explicitly disabled via build config
00:01:58.403 latencystats: explicitly disabled via build config
00:01:58.403 lpm: explicitly disabled via build config
00:01:58.403 member: explicitly disabled via build config
00:01:58.403 pcapng: explicitly disabled via build config
00:01:58.403 rawdev: explicitly disabled via build config
00:01:58.403 regexdev: explicitly disabled via build config
00:01:58.403 mldev: explicitly disabled via build config
00:01:58.403 rib: explicitly disabled via build config
00:01:58.403 sched: explicitly disabled via build config
00:01:58.403 stack: explicitly disabled via build config
00:01:58.403 ipsec: explicitly disabled via build config
00:01:58.403 pdcp: explicitly disabled via build config
00:01:58.403 fib: explicitly disabled via build config
00:01:58.403 port: explicitly disabled via build config
00:01:58.403 pdump: explicitly disabled via build config
00:01:58.403 table: explicitly disabled via build config
00:01:58.403 pipeline: explicitly disabled via build config
00:01:58.403 graph: explicitly disabled via build config
00:01:58.403 node: explicitly disabled via build config
00:01:58.403
00:01:58.403 drivers:
00:01:58.403 common/cpt: not in enabled drivers build config
00:01:58.403 common/dpaax: not in enabled drivers build config
00:01:58.403 common/iavf: not in enabled drivers build config
00:01:58.403 common/idpf: not in enabled drivers build config
00:01:58.403 common/ionic: not in enabled drivers build config
00:01:58.403 common/mvep: not in enabled drivers build config
00:01:58.403 common/octeontx: not in enabled drivers build config
00:01:58.403 bus/auxiliary: not in enabled drivers build config
00:01:58.403 bus/cdx: not in enabled drivers build config
00:01:58.403 bus/dpaa: not in enabled drivers build config
00:01:58.403 bus/fslmc: not in enabled drivers build config
00:01:58.403 bus/ifpga: not in enabled drivers build config
00:01:58.403 bus/platform: not in enabled drivers build config
00:01:58.403 bus/uacce: not in enabled drivers build config
00:01:58.403 bus/vmbus: not in enabled drivers build config
00:01:58.403 common/cnxk: not in enabled drivers build config
00:01:58.403 common/mlx5: not in enabled drivers build config
00:01:58.403 common/nfp: not in enabled drivers build config
00:01:58.403 common/nitrox: not in enabled drivers build config
00:01:58.403 common/qat: not in enabled drivers build config
00:01:58.403 common/sfc_efx: not in enabled drivers build config
00:01:58.403 mempool/bucket: not in enabled drivers build config
00:01:58.403 mempool/cnxk: not in enabled drivers build config
00:01:58.403 mempool/dpaa: not in enabled drivers build config
00:01:58.403 mempool/dpaa2: not in enabled drivers build config
00:01:58.403 mempool/octeontx: not in enabled drivers build config
00:01:58.403 mempool/stack: not in enabled drivers build config
00:01:58.403 dma/cnxk: not in enabled drivers build config
00:01:58.403 dma/dpaa: not in enabled drivers build config
00:01:58.403 dma/dpaa2: not in enabled drivers build config
00:01:58.403 dma/hisilicon: not in enabled drivers build config
00:01:58.403 dma/idxd: not in enabled drivers build config
00:01:58.403 dma/ioat: not in enabled drivers build config
00:01:58.403 dma/skeleton: not in enabled drivers build config
00:01:58.403 net/af_packet: not in enabled drivers build config
00:01:58.403 net/af_xdp: not in enabled drivers build config
00:01:58.403 net/ark: not in enabled drivers build config
00:01:58.403 net/atlantic: not in enabled drivers build config
00:01:58.403 net/avp: not in enabled drivers build config
00:01:58.403 net/axgbe: not in enabled drivers build config
00:01:58.403 net/bnx2x: not in enabled drivers build config
00:01:58.403 net/bnxt: not in enabled drivers build config
00:01:58.403 net/bonding: not in enabled drivers build config
00:01:58.403 net/cnxk: not in enabled drivers build config
00:01:58.403 net/cpfl: not in enabled drivers build config
00:01:58.403 net/cxgbe: not in enabled drivers build config
00:01:58.403 net/dpaa: not in enabled drivers build config
00:01:58.403 net/dpaa2: not in enabled drivers build config
00:01:58.403 net/e1000: not in enabled drivers build config
00:01:58.403 net/ena: not in enabled drivers build config
00:01:58.403 net/enetc: not in enabled drivers build config
00:01:58.403 net/enetfec: not in enabled drivers build config
00:01:58.403 net/enic: not in enabled drivers build config
00:01:58.403 net/failsafe: not in enabled drivers build config
00:01:58.403 net/fm10k: not in enabled drivers build config
00:01:58.403 net/gve: not in enabled drivers build config
00:01:58.403 net/hinic: not in enabled drivers build config
00:01:58.403 net/hns3: not in enabled drivers build config
00:01:58.403 net/i40e: not in enabled drivers build config
00:01:58.403 net/iavf: not in enabled drivers build config
00:01:58.403 net/ice: not in enabled drivers build config
00:01:58.403 net/idpf: not in enabled drivers build config
00:01:58.403 net/igc: not in enabled drivers build config
00:01:58.403 net/ionic: not in enabled drivers build config
00:01:58.403 net/ipn3ke: not in enabled drivers build config
00:01:58.403 net/ixgbe: not in enabled drivers build config
00:01:58.403 net/mana: not in enabled drivers build config
00:01:58.403 net/memif: not in enabled drivers build config
00:01:58.403 net/mlx4: not in enabled drivers build config
00:01:58.403 net/mlx5: not in enabled drivers build config
00:01:58.403 net/mvneta: not in enabled drivers build config
00:01:58.403 net/mvpp2: not in enabled drivers build config
00:01:58.403 net/netvsc: not in enabled drivers build config
00:01:58.403 net/nfb: not in enabled drivers build config
00:01:58.403 net/nfp: not in enabled drivers build config
00:01:58.403 net/ngbe: not in enabled drivers build config
00:01:58.403 net/null: not in enabled drivers build config
00:01:58.403 net/octeontx: not in enabled drivers build config
00:01:58.403 net/octeon_ep: not in enabled drivers build config
00:01:58.403 net/pcap: not in enabled drivers build config
00:01:58.403 net/pfe: not in enabled drivers build config
00:01:58.403 net/qede: not in enabled drivers build config
00:01:58.403 net/ring: not in enabled drivers build config
00:01:58.403 net/sfc: not in enabled drivers build config
00:01:58.403 net/softnic: not in enabled drivers build config
00:01:58.403 net/tap: not in enabled drivers build config
00:01:58.403 net/thunderx: not in enabled drivers build config
00:01:58.403 net/txgbe: not in enabled drivers build config
00:01:58.403 net/vdev_netvsc: not in enabled drivers build config
00:01:58.403 net/vhost: not in enabled drivers build config
00:01:58.403 net/virtio: not in enabled drivers build config
00:01:58.403 net/vmxnet3: not in enabled drivers build config
00:01:58.403 raw/*: missing internal dependency, "rawdev"
00:01:58.403 crypto/armv8: not in enabled drivers build config
00:01:58.403 crypto/bcmfs: not in enabled drivers build config
00:01:58.403 crypto/caam_jr: not in enabled drivers build config
00:01:58.403 crypto/ccp: not in enabled drivers build config
00:01:58.403 crypto/cnxk: not in enabled drivers build config
00:01:58.403 crypto/dpaa_sec: not in enabled drivers build config
00:01:58.403 crypto/dpaa2_sec: not in enabled drivers build config
00:01:58.403 crypto/ipsec_mb: not in enabled drivers build config
00:01:58.403 crypto/mlx5: not in enabled drivers build config
00:01:58.403 crypto/mvsam: not in enabled drivers build config
00:01:58.403 crypto/nitrox: not in enabled drivers build config
00:01:58.403 crypto/null: not in enabled drivers build config
00:01:58.403 crypto/octeontx: not in enabled drivers build config
00:01:58.403 crypto/openssl: not in enabled drivers build config
00:01:58.403 crypto/scheduler: not in enabled drivers build config
00:01:58.403 crypto/uadk: not in enabled drivers build config
00:01:58.404 crypto/virtio: not in enabled drivers build config
00:01:58.404 compress/isal: not in enabled drivers build config
00:01:58.404 compress/mlx5: not in enabled drivers build config
00:01:58.404 compress/nitrox: not in enabled drivers build config
00:01:58.404 compress/octeontx: not in enabled drivers build config
00:01:58.404 compress/zlib: not in enabled drivers build config
00:01:58.404 regex/*: missing internal dependency, "regexdev"
00:01:58.404 ml/*: missing internal dependency, "mldev"
00:01:58.404 vdpa/ifc: not in enabled drivers build config
00:01:58.404 vdpa/mlx5: not in enabled drivers build config
00:01:58.404 vdpa/nfp: not in enabled drivers build config
00:01:58.404 vdpa/sfc: not in enabled drivers build config
00:01:58.404 event/*: missing internal dependency, "eventdev"
00:01:58.404 baseband/*: missing internal dependency, "bbdev"
00:01:58.404 gpu/*: missing internal dependency, "gpudev"
00:01:58.404
00:01:58.404
00:01:58.404 Build targets in project: 85
00:01:58.404
00:01:58.404 DPDK 24.03.0
00:01:58.404
00:01:58.404 User defined options
00:01:58.404 buildtype : debug
00:01:58.404 default_library : shared
00:01:58.404 libdir : lib
00:01:58.404 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:58.404 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:58.404 c_link_args :
00:01:58.404 cpu_instruction_set: native
00:01:58.404 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:58.404 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:58.404 enable_docs : false
00:01:58.404 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:58.404 enable_kmods : false
00:01:58.404 max_lcores : 128
00:01:58.404 tests : false
00:01:58.404
00:01:58.404 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:58.404 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:58.672 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:58.672 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:58.672 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:58.672 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:58.672 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:58.672 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:58.672 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:58.672 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:58.672 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:58.672 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:58.672 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:58.672 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:58.672 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:58.672 [14/268] Linking static target lib/librte_kvargs.a
00:01:58.672 [15/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:58.672 [16/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:58.672 [17/268] Linking static target lib/librte_log.a
00:01:58.934 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:58.934 [19/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:58.934 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:58.934 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:58.934 [22/268] Linking static target lib/librte_pci.a
00:01:58.934 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:58.934 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:58.934 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:58.934 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:58.934 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:58.934 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:58.934 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:58.934 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:58.934 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:58.934 [32/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:58.934 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:59.191 [34/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:59.191 [35/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:59.191 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:59.191 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:59.191 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:59.191 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:59.191 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:59.191 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:59.191 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:59.191 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:59.191 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:59.191 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:59.191 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:59.191 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:59.191 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:59.191 [49/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:59.191 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:59.191 [51/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:59.191 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:59.191 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:59.191 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:59.191 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:59.191 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:59.191 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:59.191 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:59.191 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:59.191 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:59.191 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:59.191 [62/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:59.191 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:59.191 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:59.191 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:59.191 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:59.191 [67/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:59.191 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:59.191 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:59.191 [70/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:59.191 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:59.191 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:59.191 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:59.191 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:59.191 [75/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:59.191 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:59.191 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:59.191 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:59.191 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:59.191 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:59.191 [81/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:59.191 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:59.191 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:59.191 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:59.191 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:59.191 [86/268] Linking static target lib/librte_meter.a
00:01:59.450 [87/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.450 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:59.450 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:59.450 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:59.450 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:59.450 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:59.450 [93/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:59.450 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:59.450 [95/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.450 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:59.450 [97/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:59.450 [98/268] Linking static target lib/librte_ring.a
00:01:59.450 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:59.450 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:59.450 [101/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:59.450 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:59.450 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:59.450 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:59.450 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:59.450 [106/268] Linking static target lib/librte_telemetry.a
00:01:59.450 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:59.450 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:59.450 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:59.450 [110/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:59.450 [111/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:59.450 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:59.450 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:59.450 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:59.450 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:59.450 [116/268] Linking static target lib/librte_cmdline.a
00:01:59.450 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:59.450 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:59.450 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:59.450 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:59.450 [121/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:59.450 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:59.450 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:59.450 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:59.450 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:59.450 [126/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:59.450 [127/268] Linking static target lib/librte_mempool.a
00:01:59.450 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:59.450 [129/268] Linking static target lib/librte_rcu.a
00:01:59.450 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:59.450 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:59.450 [132/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:59.451 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:59.451 [134/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:59.451 [135/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:59.451 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:59.451 [137/268] Linking static target lib/librte_net.a
00:01:59.451 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:59.451 [139/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:59.451 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:59.451 [141/268] Linking static target lib/librte_eal.a
00:01:59.451 [142/268] Linking static target lib/librte_timer.a
00:01:59.451 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:59.451 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:59.451 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:59.451 [146/268] Linking static target lib/librte_compressdev.a
00:01:59.451 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:59.451 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:59.451 [149/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.451 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:59.451 [151/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:59.451 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:59.451 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:59.451 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:59.451 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:59.451 [156/268] Linking static target lib/librte_mbuf.a
00:01:59.451 [157/268] Linking static target lib/librte_dmadev.a
00:01:59.451 [158/268] Linking target lib/librte_log.so.24.1
00:01:59.451 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:59.451 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:59.710 [161/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.710 [162/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.710 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:59.710 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:59.710 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:59.710 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:59.710 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:59.710 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:59.710 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:59.710 [170/268] Linking static target lib/librte_hash.a
00:01:59.710 [171/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:59.710 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:59.710 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:59.710 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:59.710 [175/268] Linking static target lib/librte_reorder.a
00:01:59.710 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:59.710 [177/268] Linking target lib/librte_kvargs.so.24.1
00:01:59.710 [178/268] Linking static target lib/librte_power.a
00:01:59.710 [179/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.710 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.710 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:59.710 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:59.710 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:59.710 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.710 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.710 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.710 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.711 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.969 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.969 [190/268] Linking static target lib/librte_security.a 00:01:59.969 [191/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.969 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.969 [193/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.969 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.969 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.969 [196/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.969 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.969 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.969 [199/268] Linking static target lib/librte_cryptodev.a 00:01:59.969 [200/268] Linking target lib/librte_telemetry.so.24.1 00:01:59.969 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:59.969 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.969 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.969 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.969 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.969 [206/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.969 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.969 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:59.969 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.969 [210/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:59.969 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.969 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.969 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:00.228 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [217/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.228 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.228 [221/268] Linking static target lib/librte_ethdev.a 00:02:00.486 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.486 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.486 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.486 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.486 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.743 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.677 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.677 [229/268] Linking static target lib/librte_vhost.a 00:02:01.677 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.582 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.859 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.427 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.427 [234/268] Linking target lib/librte_eal.so.24.1 00:02:09.685 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:09.685 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:09.685 [237/268] Linking target lib/librte_ring.so.24.1 00:02:09.685 [238/268] Linking target lib/librte_timer.so.24.1 00:02:09.685 [239/268] Linking target lib/librte_meter.so.24.1 00:02:09.685 [240/268] Linking target lib/librte_pci.so.24.1 00:02:09.685 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:09.685 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:09.685 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:09.685 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:09.685 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:09.944 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:09.944 [247/268] Linking target drivers/librte_bus_pci.so.24.1 
00:02:09.944 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:09.944 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:09.944 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.944 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:09.944 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:09.944 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:10.203 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:10.203 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:10.203 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:10.203 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:10.203 [258/268] Linking target lib/librte_net.so.24.1 00:02:10.203 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:10.203 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:10.461 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:10.461 [262/268] Linking target lib/librte_hash.so.24.1 00:02:10.461 [263/268] Linking target lib/librte_security.so.24.1 00:02:10.461 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:10.462 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:10.462 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:10.462 [267/268] Linking target lib/librte_power.so.24.1 00:02:10.462 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:10.462 INFO: autodetecting backend as ninja 00:02:10.462 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:20.443 CC lib/log/log.o 00:02:20.443 CC lib/log/log_flags.o 00:02:20.443 CC lib/log/log_deprecated.o 00:02:20.443 CC lib/ut_mock/mock.o 00:02:20.443 CC lib/ut/ut.o 
00:02:20.443 LIB libspdk_log.a 00:02:20.443 LIB libspdk_ut.a 00:02:20.443 LIB libspdk_ut_mock.a 00:02:20.443 SO libspdk_log.so.7.1 00:02:20.443 SO libspdk_ut.so.2.0 00:02:20.443 SO libspdk_ut_mock.so.6.0 00:02:20.443 SYMLINK libspdk_log.so 00:02:20.443 SYMLINK libspdk_ut.so 00:02:20.443 SYMLINK libspdk_ut_mock.so 00:02:20.443 CXX lib/trace_parser/trace.o 00:02:20.443 CC lib/dma/dma.o 00:02:20.443 CC lib/util/base64.o 00:02:20.443 CC lib/ioat/ioat.o 00:02:20.443 CC lib/util/bit_array.o 00:02:20.443 CC lib/util/cpuset.o 00:02:20.443 CC lib/util/crc16.o 00:02:20.443 CC lib/util/crc32.o 00:02:20.443 CC lib/util/crc32c.o 00:02:20.443 CC lib/util/crc32_ieee.o 00:02:20.443 CC lib/util/dif.o 00:02:20.443 CC lib/util/crc64.o 00:02:20.443 CC lib/util/fd.o 00:02:20.443 CC lib/util/fd_group.o 00:02:20.443 CC lib/util/file.o 00:02:20.443 CC lib/util/hexlify.o 00:02:20.443 CC lib/util/iov.o 00:02:20.443 CC lib/util/math.o 00:02:20.443 CC lib/util/net.o 00:02:20.443 CC lib/util/pipe.o 00:02:20.443 CC lib/util/strerror_tls.o 00:02:20.443 CC lib/util/string.o 00:02:20.443 CC lib/util/uuid.o 00:02:20.443 CC lib/util/xor.o 00:02:20.443 CC lib/util/zipf.o 00:02:20.443 CC lib/util/md5.o 00:02:20.443 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.443 CC lib/vfio_user/host/vfio_user.o 00:02:20.443 LIB libspdk_dma.a 00:02:20.443 SO libspdk_dma.so.5.0 00:02:20.443 LIB libspdk_ioat.a 00:02:20.443 SYMLINK libspdk_dma.so 00:02:20.443 SO libspdk_ioat.so.7.0 00:02:20.443 SYMLINK libspdk_ioat.so 00:02:20.443 LIB libspdk_vfio_user.a 00:02:20.443 SO libspdk_vfio_user.so.5.0 00:02:20.443 LIB libspdk_util.a 00:02:20.443 SYMLINK libspdk_vfio_user.so 00:02:20.702 SO libspdk_util.so.10.1 00:02:20.702 SYMLINK libspdk_util.so 00:02:20.702 LIB libspdk_trace_parser.a 00:02:20.702 SO libspdk_trace_parser.so.6.0 00:02:20.962 SYMLINK libspdk_trace_parser.so 00:02:20.962 CC lib/rdma_utils/rdma_utils.o 00:02:20.962 CC lib/json/json_parse.o 00:02:20.962 CC lib/json/json_util.o 00:02:20.962 CC 
lib/json/json_write.o 00:02:20.962 CC lib/vmd/vmd.o 00:02:20.962 CC lib/vmd/led.o 00:02:20.962 CC lib/idxd/idxd.o 00:02:20.962 CC lib/env_dpdk/env.o 00:02:20.962 CC lib/conf/conf.o 00:02:20.962 CC lib/env_dpdk/memory.o 00:02:20.962 CC lib/idxd/idxd_user.o 00:02:20.962 CC lib/env_dpdk/pci.o 00:02:20.962 CC lib/idxd/idxd_kernel.o 00:02:20.962 CC lib/env_dpdk/init.o 00:02:20.962 CC lib/env_dpdk/threads.o 00:02:20.962 CC lib/env_dpdk/pci_ioat.o 00:02:20.962 CC lib/env_dpdk/pci_virtio.o 00:02:20.962 CC lib/env_dpdk/pci_vmd.o 00:02:20.962 CC lib/env_dpdk/pci_idxd.o 00:02:20.962 CC lib/env_dpdk/pci_event.o 00:02:20.962 CC lib/env_dpdk/sigbus_handler.o 00:02:20.962 CC lib/env_dpdk/pci_dpdk.o 00:02:20.962 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.962 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.220 LIB libspdk_conf.a 00:02:21.220 LIB libspdk_rdma_utils.a 00:02:21.220 SO libspdk_conf.so.6.0 00:02:21.220 LIB libspdk_json.a 00:02:21.220 SO libspdk_rdma_utils.so.1.0 00:02:21.220 SO libspdk_json.so.6.0 00:02:21.220 SYMLINK libspdk_conf.so 00:02:21.220 SYMLINK libspdk_rdma_utils.so 00:02:21.479 SYMLINK libspdk_json.so 00:02:21.479 LIB libspdk_idxd.a 00:02:21.479 SO libspdk_idxd.so.12.1 00:02:21.479 LIB libspdk_vmd.a 00:02:21.479 SO libspdk_vmd.so.6.0 00:02:21.479 SYMLINK libspdk_idxd.so 00:02:21.479 SYMLINK libspdk_vmd.so 00:02:21.738 CC lib/rdma_provider/common.o 00:02:21.738 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.739 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.739 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.739 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.739 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.739 LIB libspdk_rdma_provider.a 00:02:21.739 SO libspdk_rdma_provider.so.7.0 00:02:21.998 LIB libspdk_jsonrpc.a 00:02:21.998 SO libspdk_jsonrpc.so.6.0 00:02:21.998 SYMLINK libspdk_rdma_provider.so 00:02:21.998 SYMLINK libspdk_jsonrpc.so 00:02:21.998 LIB libspdk_env_dpdk.a 00:02:21.998 SO libspdk_env_dpdk.so.15.1 00:02:22.260 SYMLINK libspdk_env_dpdk.so 00:02:22.260 CC 
lib/rpc/rpc.o 00:02:22.520 LIB libspdk_rpc.a 00:02:22.520 SO libspdk_rpc.so.6.0 00:02:22.520 SYMLINK libspdk_rpc.so 00:02:22.779 CC lib/trace/trace.o 00:02:22.779 CC lib/trace/trace_flags.o 00:02:22.779 CC lib/trace/trace_rpc.o 00:02:22.779 CC lib/notify/notify.o 00:02:22.779 CC lib/keyring/keyring.o 00:02:22.779 CC lib/notify/notify_rpc.o 00:02:22.779 CC lib/keyring/keyring_rpc.o 00:02:23.038 LIB libspdk_notify.a 00:02:23.038 SO libspdk_notify.so.6.0 00:02:23.038 LIB libspdk_keyring.a 00:02:23.038 LIB libspdk_trace.a 00:02:23.038 SO libspdk_keyring.so.2.0 00:02:23.038 SYMLINK libspdk_notify.so 00:02:23.038 SO libspdk_trace.so.11.0 00:02:23.038 SYMLINK libspdk_keyring.so 00:02:23.038 SYMLINK libspdk_trace.so 00:02:23.607 CC lib/thread/thread.o 00:02:23.607 CC lib/thread/iobuf.o 00:02:23.607 CC lib/sock/sock.o 00:02:23.607 CC lib/sock/sock_rpc.o 00:02:23.866 LIB libspdk_sock.a 00:02:23.866 SO libspdk_sock.so.10.0 00:02:23.866 SYMLINK libspdk_sock.so 00:02:24.126 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.126 CC lib/nvme/nvme_ctrlr.o 00:02:24.126 CC lib/nvme/nvme_fabric.o 00:02:24.126 CC lib/nvme/nvme_ns_cmd.o 00:02:24.126 CC lib/nvme/nvme_ns.o 00:02:24.126 CC lib/nvme/nvme_pcie_common.o 00:02:24.126 CC lib/nvme/nvme_pcie.o 00:02:24.126 CC lib/nvme/nvme_qpair.o 00:02:24.126 CC lib/nvme/nvme.o 00:02:24.126 CC lib/nvme/nvme_quirks.o 00:02:24.126 CC lib/nvme/nvme_transport.o 00:02:24.126 CC lib/nvme/nvme_discovery.o 00:02:24.126 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:24.126 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:24.126 CC lib/nvme/nvme_tcp.o 00:02:24.126 CC lib/nvme/nvme_opal.o 00:02:24.126 CC lib/nvme/nvme_io_msg.o 00:02:24.126 CC lib/nvme/nvme_poll_group.o 00:02:24.126 CC lib/nvme/nvme_zns.o 00:02:24.126 CC lib/nvme/nvme_stubs.o 00:02:24.126 CC lib/nvme/nvme_auth.o 00:02:24.126 CC lib/nvme/nvme_cuse.o 00:02:24.126 CC lib/nvme/nvme_vfio_user.o 00:02:24.126 CC lib/nvme/nvme_rdma.o 00:02:24.386 LIB libspdk_thread.a 00:02:24.386 SO libspdk_thread.so.11.0 00:02:24.645 SYMLINK 
libspdk_thread.so 00:02:24.903 CC lib/fsdev/fsdev_io.o 00:02:24.903 CC lib/blob/blobstore.o 00:02:24.903 CC lib/fsdev/fsdev.o 00:02:24.903 CC lib/blob/request.o 00:02:24.903 CC lib/fsdev/fsdev_rpc.o 00:02:24.903 CC lib/blob/zeroes.o 00:02:24.903 CC lib/blob/blob_bs_dev.o 00:02:24.903 CC lib/virtio/virtio.o 00:02:24.903 CC lib/virtio/virtio_vhost_user.o 00:02:24.903 CC lib/virtio/virtio_vfio_user.o 00:02:24.903 CC lib/virtio/virtio_pci.o 00:02:24.903 CC lib/accel/accel.o 00:02:24.903 CC lib/accel/accel_rpc.o 00:02:24.903 CC lib/accel/accel_sw.o 00:02:24.903 CC lib/vfu_tgt/tgt_endpoint.o 00:02:24.903 CC lib/vfu_tgt/tgt_rpc.o 00:02:24.903 CC lib/init/json_config.o 00:02:24.903 CC lib/init/subsystem.o 00:02:24.903 CC lib/init/subsystem_rpc.o 00:02:24.903 CC lib/init/rpc.o 00:02:25.161 LIB libspdk_init.a 00:02:25.161 LIB libspdk_virtio.a 00:02:25.161 SO libspdk_init.so.6.0 00:02:25.161 LIB libspdk_vfu_tgt.a 00:02:25.161 SO libspdk_virtio.so.7.0 00:02:25.161 SO libspdk_vfu_tgt.so.3.0 00:02:25.161 SYMLINK libspdk_init.so 00:02:25.161 SYMLINK libspdk_virtio.so 00:02:25.161 SYMLINK libspdk_vfu_tgt.so 00:02:25.421 LIB libspdk_fsdev.a 00:02:25.421 SO libspdk_fsdev.so.2.0 00:02:25.421 SYMLINK libspdk_fsdev.so 00:02:25.421 CC lib/event/app.o 00:02:25.421 CC lib/event/reactor.o 00:02:25.421 CC lib/event/log_rpc.o 00:02:25.421 CC lib/event/app_rpc.o 00:02:25.421 CC lib/event/scheduler_static.o 00:02:25.680 LIB libspdk_accel.a 00:02:25.680 SO libspdk_accel.so.16.0 00:02:25.680 SYMLINK libspdk_accel.so 00:02:25.680 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:25.680 LIB libspdk_nvme.a 00:02:25.680 LIB libspdk_event.a 00:02:25.939 SO libspdk_nvme.so.15.0 00:02:25.939 SO libspdk_event.so.14.0 00:02:25.939 SYMLINK libspdk_event.so 00:02:25.939 SYMLINK libspdk_nvme.so 00:02:25.939 CC lib/bdev/bdev.o 00:02:25.939 CC lib/bdev/bdev_rpc.o 00:02:25.939 CC lib/bdev/bdev_zone.o 00:02:25.939 CC lib/bdev/part.o 00:02:25.939 CC lib/bdev/scsi_nvme.o 00:02:26.198 LIB libspdk_fuse_dispatcher.a 
00:02:26.198 SO libspdk_fuse_dispatcher.so.1.0 00:02:26.198 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.766 LIB libspdk_blob.a 00:02:26.766 SO libspdk_blob.so.12.0 00:02:27.025 SYMLINK libspdk_blob.so 00:02:27.283 CC lib/blobfs/blobfs.o 00:02:27.283 CC lib/blobfs/tree.o 00:02:27.283 CC lib/lvol/lvol.o 00:02:27.852 LIB libspdk_bdev.a 00:02:27.852 SO libspdk_bdev.so.17.0 00:02:27.852 LIB libspdk_blobfs.a 00:02:27.852 SO libspdk_blobfs.so.11.0 00:02:27.852 SYMLINK libspdk_bdev.so 00:02:27.852 LIB libspdk_lvol.a 00:02:27.852 SO libspdk_lvol.so.11.0 00:02:27.852 SYMLINK libspdk_blobfs.so 00:02:27.852 SYMLINK libspdk_lvol.so 00:02:28.114 CC lib/ublk/ublk.o 00:02:28.114 CC lib/ublk/ublk_rpc.o 00:02:28.114 CC lib/nvmf/ctrlr.o 00:02:28.114 CC lib/nvmf/ctrlr_discovery.o 00:02:28.114 CC lib/nvmf/ctrlr_bdev.o 00:02:28.114 CC lib/nvmf/subsystem.o 00:02:28.114 CC lib/nvmf/nvmf.o 00:02:28.114 CC lib/nvmf/nvmf_rpc.o 00:02:28.114 CC lib/nvmf/transport.o 00:02:28.114 CC lib/scsi/dev.o 00:02:28.114 CC lib/nvmf/tcp.o 00:02:28.114 CC lib/scsi/lun.o 00:02:28.114 CC lib/nvmf/stubs.o 00:02:28.114 CC lib/scsi/port.o 00:02:28.114 CC lib/nvmf/mdns_server.o 00:02:28.114 CC lib/scsi/scsi.o 00:02:28.114 CC lib/nvmf/vfio_user.o 00:02:28.114 CC lib/ftl/ftl_core.o 00:02:28.114 CC lib/ftl/ftl_init.o 00:02:28.114 CC lib/scsi/scsi_bdev.o 00:02:28.114 CC lib/nbd/nbd.o 00:02:28.114 CC lib/nvmf/rdma.o 00:02:28.114 CC lib/nvmf/auth.o 00:02:28.114 CC lib/scsi/scsi_pr.o 00:02:28.114 CC lib/ftl/ftl_layout.o 00:02:28.114 CC lib/nbd/nbd_rpc.o 00:02:28.114 CC lib/scsi/scsi_rpc.o 00:02:28.114 CC lib/scsi/task.o 00:02:28.114 CC lib/ftl/ftl_debug.o 00:02:28.114 CC lib/ftl/ftl_io.o 00:02:28.114 CC lib/ftl/ftl_sb.o 00:02:28.114 CC lib/ftl/ftl_l2p.o 00:02:28.114 CC lib/ftl/ftl_l2p_flat.o 00:02:28.114 CC lib/ftl/ftl_nv_cache.o 00:02:28.115 CC lib/ftl/ftl_band.o 00:02:28.115 CC lib/ftl/ftl_band_ops.o 00:02:28.115 CC lib/ftl/ftl_writer.o 00:02:28.115 CC lib/ftl/ftl_rq.o 00:02:28.115 CC lib/ftl/ftl_reloc.o 00:02:28.115 
CC lib/ftl/ftl_l2p_cache.o 00:02:28.115 CC lib/ftl/ftl_p2l.o 00:02:28.115 CC lib/ftl/ftl_p2l_log.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.115 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.115 CC lib/ftl/utils/ftl_conf.o 00:02:28.374 CC lib/ftl/utils/ftl_md.o 00:02:28.374 CC lib/ftl/utils/ftl_mempool.o 00:02:28.374 CC lib/ftl/utils/ftl_property.o 00:02:28.374 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.374 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.374 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.374 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.374 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.374 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.374 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.374 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.374 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.374 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:28.374 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.374 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.374 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.374 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:28.374 CC lib/ftl/base/ftl_base_dev.o 00:02:28.374 CC lib/ftl/ftl_trace.o 00:02:28.632 LIB libspdk_scsi.a 00:02:28.890 SO libspdk_scsi.so.9.0 00:02:28.890 LIB libspdk_nbd.a 00:02:28.890 SO libspdk_nbd.so.7.0 00:02:28.890 SYMLINK libspdk_scsi.so 00:02:28.890 SYMLINK libspdk_nbd.so 00:02:28.890 LIB libspdk_ublk.a 00:02:29.148 SO libspdk_ublk.so.3.0 00:02:29.148 SYMLINK libspdk_ublk.so 00:02:29.148 CC 
lib/iscsi/iscsi.o 00:02:29.148 CC lib/iscsi/conn.o 00:02:29.148 CC lib/iscsi/init_grp.o 00:02:29.148 CC lib/iscsi/param.o 00:02:29.148 CC lib/iscsi/portal_grp.o 00:02:29.148 CC lib/iscsi/tgt_node.o 00:02:29.148 CC lib/iscsi/iscsi_subsystem.o 00:02:29.148 CC lib/iscsi/iscsi_rpc.o 00:02:29.148 CC lib/iscsi/task.o 00:02:29.148 LIB libspdk_ftl.a 00:02:29.148 CC lib/vhost/vhost.o 00:02:29.148 CC lib/vhost/vhost_rpc.o 00:02:29.148 CC lib/vhost/vhost_scsi.o 00:02:29.148 CC lib/vhost/vhost_blk.o 00:02:29.148 CC lib/vhost/rte_vhost_user.o 00:02:29.408 SO libspdk_ftl.so.9.0 00:02:29.667 SYMLINK libspdk_ftl.so 00:02:29.926 LIB libspdk_nvmf.a 00:02:29.926 SO libspdk_nvmf.so.20.0 00:02:29.926 LIB libspdk_vhost.a 00:02:29.926 SO libspdk_vhost.so.8.0 00:02:30.185 SYMLINK libspdk_nvmf.so 00:02:30.185 SYMLINK libspdk_vhost.so 00:02:30.185 LIB libspdk_iscsi.a 00:02:30.185 SO libspdk_iscsi.so.8.0 00:02:30.185 SYMLINK libspdk_iscsi.so 00:02:30.755 CC module/vfu_device/vfu_virtio_blk.o 00:02:30.755 CC module/vfu_device/vfu_virtio.o 00:02:30.755 CC module/vfu_device/vfu_virtio_scsi.o 00:02:30.755 CC module/vfu_device/vfu_virtio_rpc.o 00:02:30.755 CC module/vfu_device/vfu_virtio_fs.o 00:02:30.755 CC module/env_dpdk/env_dpdk_rpc.o 00:02:31.012 CC module/keyring/file/keyring.o 00:02:31.012 CC module/keyring/file/keyring_rpc.o 00:02:31.012 CC module/blob/bdev/blob_bdev.o 00:02:31.012 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.012 CC module/keyring/linux/keyring.o 00:02:31.012 CC module/keyring/linux/keyring_rpc.o 00:02:31.012 LIB libspdk_env_dpdk_rpc.a 00:02:31.012 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.012 CC module/accel/error/accel_error.o 00:02:31.012 CC module/accel/error/accel_error_rpc.o 00:02:31.012 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.012 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.012 CC module/accel/ioat/accel_ioat.o 00:02:31.012 CC module/accel/iaa/accel_iaa.o 00:02:31.013 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.013 CC 
module/accel/dsa/accel_dsa.o 00:02:31.013 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.013 CC module/fsdev/aio/fsdev_aio.o 00:02:31.013 CC module/sock/posix/posix.o 00:02:31.013 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:31.013 CC module/fsdev/aio/linux_aio_mgr.o 00:02:31.013 SO libspdk_env_dpdk_rpc.so.6.0 00:02:31.013 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.013 LIB libspdk_keyring_file.a 00:02:31.013 LIB libspdk_scheduler_gscheduler.a 00:02:31.013 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.013 SO libspdk_keyring_file.so.2.0 00:02:31.013 LIB libspdk_keyring_linux.a 00:02:31.013 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:31.013 SO libspdk_scheduler_gscheduler.so.4.0 00:02:31.013 SO libspdk_keyring_linux.so.1.0 00:02:31.270 LIB libspdk_accel_iaa.a 00:02:31.270 LIB libspdk_accel_ioat.a 00:02:31.270 LIB libspdk_scheduler_dynamic.a 00:02:31.270 LIB libspdk_accel_error.a 00:02:31.270 SYMLINK libspdk_keyring_file.so 00:02:31.270 LIB libspdk_blob_bdev.a 00:02:31.271 SO libspdk_accel_iaa.so.3.0 00:02:31.271 SO libspdk_scheduler_dynamic.so.4.0 00:02:31.271 SO libspdk_accel_ioat.so.6.0 00:02:31.271 SO libspdk_accel_error.so.2.0 00:02:31.271 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.271 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.271 SYMLINK libspdk_keyring_linux.so 00:02:31.271 SO libspdk_blob_bdev.so.12.0 00:02:31.271 LIB libspdk_accel_dsa.a 00:02:31.271 SYMLINK libspdk_accel_ioat.so 00:02:31.271 SYMLINK libspdk_accel_iaa.so 00:02:31.271 SYMLINK libspdk_accel_error.so 00:02:31.271 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.271 SO libspdk_accel_dsa.so.5.0 00:02:31.271 SYMLINK libspdk_blob_bdev.so 00:02:31.271 LIB libspdk_vfu_device.a 00:02:31.271 SO libspdk_vfu_device.so.3.0 00:02:31.271 SYMLINK libspdk_accel_dsa.so 00:02:31.271 SYMLINK libspdk_vfu_device.so 00:02:31.528 LIB libspdk_fsdev_aio.a 00:02:31.528 LIB libspdk_sock_posix.a 00:02:31.528 SO libspdk_fsdev_aio.so.1.0 00:02:31.528 SO libspdk_sock_posix.so.6.0 00:02:31.528 SYMLINK 
libspdk_fsdev_aio.so 00:02:31.528 SYMLINK libspdk_sock_posix.so 00:02:31.786 CC module/bdev/gpt/gpt.o 00:02:31.786 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.786 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.786 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.786 CC module/bdev/delay/vbdev_delay.o 00:02:31.787 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.787 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.787 CC module/bdev/null/bdev_null.o 00:02:31.787 CC module/bdev/null/bdev_null_rpc.o 00:02:31.787 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.787 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.787 CC module/bdev/raid/bdev_raid.o 00:02:31.787 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.787 CC module/bdev/error/vbdev_error.o 00:02:31.787 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.787 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.787 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.787 CC module/bdev/aio/bdev_aio.o 00:02:31.787 CC module/bdev/raid/raid0.o 00:02:31.787 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.787 CC module/bdev/raid/concat.o 00:02:31.787 CC module/bdev/raid/raid1.o 00:02:31.787 CC module/bdev/split/vbdev_split.o 00:02:31.787 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.787 CC module/bdev/ftl/bdev_ftl.o 00:02:31.787 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.787 CC module/bdev/nvme/bdev_nvme.o 00:02:31.787 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.787 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.787 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.787 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.787 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.787 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.787 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.787 CC module/bdev/nvme/nvme_rpc.o 00:02:31.787 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.787 CC module/bdev/nvme/vbdev_opal.o 00:02:31.787 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.787 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.787 CC module/bdev/malloc/bdev_malloc.o 00:02:31.787 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.045 LIB libspdk_bdev_gpt.a 00:02:32.045 LIB libspdk_blobfs_bdev.a 00:02:32.045 SO libspdk_bdev_gpt.so.6.0 00:02:32.045 LIB libspdk_bdev_split.a 00:02:32.045 SO libspdk_blobfs_bdev.so.6.0 00:02:32.045 LIB libspdk_bdev_error.a 00:02:32.045 LIB libspdk_bdev_null.a 00:02:32.045 SO libspdk_bdev_split.so.6.0 00:02:32.045 SO libspdk_bdev_error.so.6.0 00:02:32.045 SYMLINK libspdk_bdev_gpt.so 00:02:32.045 LIB libspdk_bdev_iscsi.a 00:02:32.045 LIB libspdk_bdev_passthru.a 00:02:32.045 LIB libspdk_bdev_zone_block.a 00:02:32.045 SO libspdk_bdev_null.so.6.0 00:02:32.045 SYMLINK libspdk_blobfs_bdev.so 00:02:32.045 LIB libspdk_bdev_ftl.a 00:02:32.045 SO libspdk_bdev_ftl.so.6.0 00:02:32.045 SYMLINK libspdk_bdev_split.so 00:02:32.045 SO libspdk_bdev_passthru.so.6.0 00:02:32.045 SO libspdk_bdev_iscsi.so.6.0 00:02:32.045 LIB libspdk_bdev_malloc.a 00:02:32.045 LIB libspdk_bdev_aio.a 00:02:32.045 SYMLINK libspdk_bdev_error.so 00:02:32.045 SO libspdk_bdev_zone_block.so.6.0 00:02:32.045 LIB libspdk_bdev_delay.a 00:02:32.045 SYMLINK libspdk_bdev_null.so 00:02:32.305 SO libspdk_bdev_malloc.so.6.0 00:02:32.305 SO libspdk_bdev_aio.so.6.0 00:02:32.305 SO libspdk_bdev_delay.so.6.0 00:02:32.305 SYMLINK libspdk_bdev_iscsi.so 00:02:32.305 SYMLINK libspdk_bdev_ftl.so 00:02:32.305 SYMLINK libspdk_bdev_zone_block.so 00:02:32.305 SYMLINK libspdk_bdev_passthru.so 00:02:32.305 LIB libspdk_bdev_lvol.a 00:02:32.305 SYMLINK libspdk_bdev_malloc.so 00:02:32.305 SO libspdk_bdev_lvol.so.6.0 00:02:32.305 SYMLINK libspdk_bdev_delay.so 00:02:32.305 SYMLINK libspdk_bdev_aio.so 00:02:32.305 LIB libspdk_bdev_virtio.a 00:02:32.305 SO libspdk_bdev_virtio.so.6.0 00:02:32.305 SYMLINK libspdk_bdev_lvol.so 00:02:32.305 SYMLINK libspdk_bdev_virtio.so 00:02:32.563 LIB libspdk_bdev_raid.a 00:02:32.563 SO libspdk_bdev_raid.so.6.0 00:02:32.821 SYMLINK libspdk_bdev_raid.so 
00:02:33.388 LIB libspdk_bdev_nvme.a 00:02:33.647 SO libspdk_bdev_nvme.so.7.1 00:02:33.647 SYMLINK libspdk_bdev_nvme.so 00:02:34.216 CC module/event/subsystems/iobuf/iobuf.o 00:02:34.216 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:34.216 CC module/event/subsystems/keyring/keyring.o 00:02:34.216 CC module/event/subsystems/vmd/vmd.o 00:02:34.216 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:34.216 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:34.216 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:34.216 CC module/event/subsystems/fsdev/fsdev.o 00:02:34.216 CC module/event/subsystems/scheduler/scheduler.o 00:02:34.216 CC module/event/subsystems/sock/sock.o 00:02:34.476 LIB libspdk_event_keyring.a 00:02:34.476 LIB libspdk_event_vhost_blk.a 00:02:34.476 LIB libspdk_event_fsdev.a 00:02:34.476 LIB libspdk_event_iobuf.a 00:02:34.476 LIB libspdk_event_vfu_tgt.a 00:02:34.476 LIB libspdk_event_vmd.a 00:02:34.476 LIB libspdk_event_sock.a 00:02:34.476 LIB libspdk_event_scheduler.a 00:02:34.476 SO libspdk_event_keyring.so.1.0 00:02:34.476 SO libspdk_event_vhost_blk.so.3.0 00:02:34.476 SO libspdk_event_fsdev.so.1.0 00:02:34.476 SO libspdk_event_sock.so.5.0 00:02:34.476 SO libspdk_event_vfu_tgt.so.3.0 00:02:34.477 SO libspdk_event_iobuf.so.3.0 00:02:34.477 SO libspdk_event_vmd.so.6.0 00:02:34.477 SO libspdk_event_scheduler.so.4.0 00:02:34.477 SYMLINK libspdk_event_sock.so 00:02:34.477 SYMLINK libspdk_event_keyring.so 00:02:34.477 SYMLINK libspdk_event_fsdev.so 00:02:34.477 SYMLINK libspdk_event_vhost_blk.so 00:02:34.477 SYMLINK libspdk_event_vfu_tgt.so 00:02:34.477 SYMLINK libspdk_event_iobuf.so 00:02:34.477 SYMLINK libspdk_event_scheduler.so 00:02:34.477 SYMLINK libspdk_event_vmd.so 00:02:34.736 CC module/event/subsystems/accel/accel.o 00:02:34.996 LIB libspdk_event_accel.a 00:02:34.996 SO libspdk_event_accel.so.6.0 00:02:34.996 SYMLINK libspdk_event_accel.so 00:02:35.256 CC module/event/subsystems/bdev/bdev.o 00:02:35.516 LIB libspdk_event_bdev.a 00:02:35.516 
SO libspdk_event_bdev.so.6.0 00:02:35.516 SYMLINK libspdk_event_bdev.so 00:02:36.086 CC module/event/subsystems/scsi/scsi.o 00:02:36.086 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:36.086 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:36.086 CC module/event/subsystems/nbd/nbd.o 00:02:36.086 CC module/event/subsystems/ublk/ublk.o 00:02:36.086 LIB libspdk_event_nbd.a 00:02:36.086 LIB libspdk_event_ublk.a 00:02:36.086 LIB libspdk_event_scsi.a 00:02:36.086 SO libspdk_event_ublk.so.3.0 00:02:36.086 SO libspdk_event_nbd.so.6.0 00:02:36.086 SO libspdk_event_scsi.so.6.0 00:02:36.086 LIB libspdk_event_nvmf.a 00:02:36.086 SYMLINK libspdk_event_ublk.so 00:02:36.086 SYMLINK libspdk_event_nbd.so 00:02:36.086 SO libspdk_event_nvmf.so.6.0 00:02:36.086 SYMLINK libspdk_event_scsi.so 00:02:36.346 SYMLINK libspdk_event_nvmf.so 00:02:36.604 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.604 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.604 LIB libspdk_event_vhost_scsi.a 00:02:36.604 LIB libspdk_event_iscsi.a 00:02:36.604 SO libspdk_event_iscsi.so.6.0 00:02:36.604 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.604 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.604 SYMLINK libspdk_event_iscsi.so 00:02:36.864 SO libspdk.so.6.0 00:02:36.864 SYMLINK libspdk.so 00:02:37.122 CC test/rpc_client/rpc_client_test.o 00:02:37.122 CXX app/trace/trace.o 00:02:37.122 CC app/trace_record/trace_record.o 00:02:37.402 CC app/spdk_lspci/spdk_lspci.o 00:02:37.402 CC app/spdk_nvme_identify/identify.o 00:02:37.402 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.402 CC app/spdk_top/spdk_top.o 00:02:37.402 TEST_HEADER include/spdk/accel.h 00:02:37.402 TEST_HEADER include/spdk/assert.h 00:02:37.402 TEST_HEADER include/spdk/accel_module.h 00:02:37.402 TEST_HEADER include/spdk/base64.h 00:02:37.402 TEST_HEADER include/spdk/bdev_module.h 00:02:37.402 CC app/spdk_nvme_perf/perf.o 00:02:37.402 TEST_HEADER include/spdk/barrier.h 00:02:37.402 TEST_HEADER include/spdk/bdev.h 00:02:37.402 
TEST_HEADER include/spdk/bdev_zone.h 00:02:37.402 TEST_HEADER include/spdk/bit_pool.h 00:02:37.402 TEST_HEADER include/spdk/bit_array.h 00:02:37.402 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.402 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.402 TEST_HEADER include/spdk/blobfs.h 00:02:37.402 TEST_HEADER include/spdk/blob.h 00:02:37.402 TEST_HEADER include/spdk/cpuset.h 00:02:37.402 TEST_HEADER include/spdk/conf.h 00:02:37.402 TEST_HEADER include/spdk/crc16.h 00:02:37.402 TEST_HEADER include/spdk/config.h 00:02:37.402 TEST_HEADER include/spdk/crc32.h 00:02:37.402 TEST_HEADER include/spdk/crc64.h 00:02:37.402 TEST_HEADER include/spdk/dif.h 00:02:37.402 TEST_HEADER include/spdk/dma.h 00:02:37.402 TEST_HEADER include/spdk/endian.h 00:02:37.402 CC app/spdk_dd/spdk_dd.o 00:02:37.402 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.402 TEST_HEADER include/spdk/event.h 00:02:37.402 TEST_HEADER include/spdk/fd_group.h 00:02:37.402 TEST_HEADER include/spdk/env.h 00:02:37.402 TEST_HEADER include/spdk/file.h 00:02:37.402 TEST_HEADER include/spdk/fd.h 00:02:37.402 TEST_HEADER include/spdk/fsdev_module.h 00:02:37.402 TEST_HEADER include/spdk/fsdev.h 00:02:37.403 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:37.403 TEST_HEADER include/spdk/ftl.h 00:02:37.403 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.403 TEST_HEADER include/spdk/hexlify.h 00:02:37.403 TEST_HEADER include/spdk/histogram_data.h 00:02:37.403 TEST_HEADER include/spdk/idxd.h 00:02:37.403 TEST_HEADER include/spdk/ioat.h 00:02:37.403 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.403 TEST_HEADER include/spdk/init.h 00:02:37.403 TEST_HEADER include/spdk/json.h 00:02:37.403 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.403 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.403 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.403 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.403 TEST_HEADER include/spdk/keyring_module.h 00:02:37.403 TEST_HEADER include/spdk/log.h 00:02:37.403 TEST_HEADER include/spdk/keyring.h 
00:02:37.403 TEST_HEADER include/spdk/likely.h 00:02:37.403 TEST_HEADER include/spdk/lvol.h 00:02:37.403 TEST_HEADER include/spdk/md5.h 00:02:37.403 TEST_HEADER include/spdk/memory.h 00:02:37.403 CC app/nvmf_tgt/nvmf_main.o 00:02:37.403 TEST_HEADER include/spdk/net.h 00:02:37.403 TEST_HEADER include/spdk/nbd.h 00:02:37.403 TEST_HEADER include/spdk/notify.h 00:02:37.403 TEST_HEADER include/spdk/nvme.h 00:02:37.403 TEST_HEADER include/spdk/mmio.h 00:02:37.403 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.403 TEST_HEADER include/spdk/nvme_intel.h 00:02:37.403 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.403 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.403 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.403 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.403 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.403 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.403 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.403 TEST_HEADER include/spdk/nvmf.h 00:02:37.403 CC app/spdk_tgt/spdk_tgt.o 00:02:37.403 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.403 TEST_HEADER include/spdk/opal_spec.h 00:02:37.403 TEST_HEADER include/spdk/pci_ids.h 00:02:37.403 TEST_HEADER include/spdk/pipe.h 00:02:37.403 TEST_HEADER include/spdk/opal.h 00:02:37.403 TEST_HEADER include/spdk/queue.h 00:02:37.403 TEST_HEADER include/spdk/reduce.h 00:02:37.403 TEST_HEADER include/spdk/scheduler.h 00:02:37.403 TEST_HEADER include/spdk/scsi.h 00:02:37.403 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.403 TEST_HEADER include/spdk/rpc.h 00:02:37.403 TEST_HEADER include/spdk/sock.h 00:02:37.403 TEST_HEADER include/spdk/stdinc.h 00:02:37.403 TEST_HEADER include/spdk/thread.h 00:02:37.403 TEST_HEADER include/spdk/string.h 00:02:37.403 TEST_HEADER include/spdk/trace_parser.h 00:02:37.403 TEST_HEADER include/spdk/ublk.h 00:02:37.403 TEST_HEADER include/spdk/trace.h 00:02:37.403 TEST_HEADER include/spdk/tree.h 00:02:37.403 TEST_HEADER include/spdk/uuid.h 00:02:37.403 TEST_HEADER include/spdk/util.h 00:02:37.403 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:37.403 TEST_HEADER include/spdk/version.h 00:02:37.403 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.403 TEST_HEADER include/spdk/xor.h 00:02:37.403 TEST_HEADER include/spdk/vhost.h 00:02:37.403 CXX test/cpp_headers/accel.o 00:02:37.403 TEST_HEADER include/spdk/vmd.h 00:02:37.403 TEST_HEADER include/spdk/zipf.h 00:02:37.403 CXX test/cpp_headers/barrier.o 00:02:37.403 CXX test/cpp_headers/accel_module.o 00:02:37.403 CXX test/cpp_headers/assert.o 00:02:37.403 CXX test/cpp_headers/base64.o 00:02:37.403 CXX test/cpp_headers/bdev.o 00:02:37.403 CXX test/cpp_headers/bdev_module.o 00:02:37.403 CXX test/cpp_headers/bit_array.o 00:02:37.403 CXX test/cpp_headers/bdev_zone.o 00:02:37.403 CXX test/cpp_headers/blob_bdev.o 00:02:37.403 CXX test/cpp_headers/blob.o 00:02:37.403 CXX test/cpp_headers/bit_pool.o 00:02:37.403 CXX test/cpp_headers/conf.o 00:02:37.403 CXX test/cpp_headers/cpuset.o 00:02:37.403 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.403 CXX test/cpp_headers/crc32.o 00:02:37.403 CXX test/cpp_headers/crc16.o 00:02:37.403 CXX test/cpp_headers/blobfs.o 00:02:37.403 CXX test/cpp_headers/crc64.o 00:02:37.403 CXX test/cpp_headers/dif.o 00:02:37.403 CXX test/cpp_headers/dma.o 00:02:37.403 CXX test/cpp_headers/env_dpdk.o 00:02:37.403 CXX test/cpp_headers/endian.o 00:02:37.403 CXX test/cpp_headers/config.o 00:02:37.403 CXX test/cpp_headers/event.o 00:02:37.403 CXX test/cpp_headers/fd.o 00:02:37.403 CXX test/cpp_headers/fd_group.o 00:02:37.403 CXX test/cpp_headers/env.o 00:02:37.403 CXX test/cpp_headers/file.o 00:02:37.403 CXX test/cpp_headers/fsdev_module.o 00:02:37.403 CXX test/cpp_headers/fsdev.o 00:02:37.403 CXX test/cpp_headers/fuse_dispatcher.o 00:02:37.403 CXX test/cpp_headers/ftl.o 00:02:37.403 CXX test/cpp_headers/gpt_spec.o 00:02:37.403 CXX test/cpp_headers/hexlify.o 00:02:37.403 CXX test/cpp_headers/histogram_data.o 00:02:37.403 CXX test/cpp_headers/idxd.o 00:02:37.403 CXX test/cpp_headers/init.o 00:02:37.403 CXX 
test/cpp_headers/ioat.o 00:02:37.403 CXX test/cpp_headers/idxd_spec.o 00:02:37.403 CXX test/cpp_headers/json.o 00:02:37.403 CXX test/cpp_headers/ioat_spec.o 00:02:37.403 CXX test/cpp_headers/iscsi_spec.o 00:02:37.403 CXX test/cpp_headers/jsonrpc.o 00:02:37.403 CXX test/cpp_headers/keyring.o 00:02:37.403 CXX test/cpp_headers/keyring_module.o 00:02:37.403 CXX test/cpp_headers/likely.o 00:02:37.403 CXX test/cpp_headers/log.o 00:02:37.403 CXX test/cpp_headers/lvol.o 00:02:37.403 CXX test/cpp_headers/md5.o 00:02:37.403 CXX test/cpp_headers/memory.o 00:02:37.403 CXX test/cpp_headers/mmio.o 00:02:37.403 CXX test/cpp_headers/net.o 00:02:37.403 CXX test/cpp_headers/nbd.o 00:02:37.403 CXX test/cpp_headers/notify.o 00:02:37.403 CXX test/cpp_headers/nvme.o 00:02:37.403 CXX test/cpp_headers/nvme_intel.o 00:02:37.403 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.403 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.403 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.403 CXX test/cpp_headers/nvme_spec.o 00:02:37.403 CXX test/cpp_headers/nvme_zns.o 00:02:37.403 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.403 CXX test/cpp_headers/nvmf.o 00:02:37.403 CXX test/cpp_headers/nvmf_spec.o 00:02:37.403 CC test/app/jsoncat/jsoncat.o 00:02:37.403 CXX test/cpp_headers/nvmf_transport.o 00:02:37.403 CXX test/cpp_headers/opal.o 00:02:37.403 CXX test/cpp_headers/opal_spec.o 00:02:37.403 CXX test/cpp_headers/pipe.o 00:02:37.403 CXX test/cpp_headers/pci_ids.o 00:02:37.403 CXX test/cpp_headers/queue.o 00:02:37.403 CXX test/cpp_headers/reduce.o 00:02:37.403 CXX test/cpp_headers/rpc.o 00:02:37.403 CXX test/cpp_headers/scsi.o 00:02:37.403 CXX test/cpp_headers/scheduler.o 00:02:37.403 CXX test/cpp_headers/scsi_spec.o 00:02:37.403 CXX test/cpp_headers/sock.o 00:02:37.403 CXX test/cpp_headers/stdinc.o 00:02:37.403 CXX test/cpp_headers/string.o 00:02:37.403 CXX test/cpp_headers/thread.o 00:02:37.403 CXX test/cpp_headers/trace.o 00:02:37.403 CXX test/cpp_headers/trace_parser.o 00:02:37.403 CXX test/cpp_headers/tree.o 
00:02:37.403 CC test/app/stub/stub.o 00:02:37.403 CC examples/ioat/perf/perf.o 00:02:37.403 CC examples/ioat/verify/verify.o 00:02:37.403 CC test/app/histogram_perf/histogram_perf.o 00:02:37.403 CC app/fio/nvme/fio_plugin.o 00:02:37.693 CC test/env/vtophys/vtophys.o 00:02:37.693 CXX test/cpp_headers/ublk.o 00:02:37.693 CC test/app/bdev_svc/bdev_svc.o 00:02:37.693 CC test/env/pci/pci_ut.o 00:02:37.693 CC test/thread/poller_perf/poller_perf.o 00:02:37.693 CC test/env/memory/memory_ut.o 00:02:37.693 CC app/fio/bdev/fio_plugin.o 00:02:37.693 CC examples/util/zipf/zipf.o 00:02:37.693 CC test/dma/test_dma/test_dma.o 00:02:37.693 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.693 LINK rpc_client_test 00:02:37.693 LINK spdk_lspci 00:02:37.959 LINK nvmf_tgt 00:02:37.959 LINK interrupt_tgt 00:02:37.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:37.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:37.959 CXX test/cpp_headers/util.o 00:02:37.959 CXX test/cpp_headers/uuid.o 00:02:37.960 CXX test/cpp_headers/vfio_user_pci.o 00:02:37.960 CXX test/cpp_headers/version.o 00:02:37.960 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:37.960 CXX test/cpp_headers/vfio_user_spec.o 00:02:37.960 CXX test/cpp_headers/vhost.o 00:02:37.960 CXX test/cpp_headers/vmd.o 00:02:37.960 CXX test/cpp_headers/xor.o 00:02:37.960 LINK spdk_nvme_discover 00:02:37.960 LINK vtophys 00:02:37.960 CXX test/cpp_headers/zipf.o 00:02:38.220 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:38.220 CC test/env/mem_callbacks/mem_callbacks.o 00:02:38.220 LINK jsoncat 00:02:38.220 LINK stub 00:02:38.220 LINK bdev_svc 00:02:38.220 LINK spdk_trace_record 00:02:38.220 LINK verify 00:02:38.220 LINK env_dpdk_post_init 00:02:38.220 LINK histogram_perf 00:02:38.220 LINK iscsi_tgt 00:02:38.220 LINK spdk_tgt 00:02:38.220 LINK spdk_dd 00:02:38.220 LINK poller_perf 00:02:38.220 LINK zipf 00:02:38.220 LINK ioat_perf 00:02:38.479 LINK spdk_trace 00:02:38.479 LINK pci_ut 00:02:38.479 LINK test_dma 00:02:38.479 LINK 
nvme_fuzz 00:02:38.479 LINK vhost_fuzz 00:02:38.479 LINK spdk_bdev 00:02:38.479 LINK spdk_nvme 00:02:38.760 CC test/event/event_perf/event_perf.o 00:02:38.760 CC test/event/reactor_perf/reactor_perf.o 00:02:38.760 CC test/event/reactor/reactor.o 00:02:38.760 CC examples/vmd/led/led.o 00:02:38.760 CC examples/vmd/lsvmd/lsvmd.o 00:02:38.760 CC test/event/app_repeat/app_repeat.o 00:02:38.760 CC examples/idxd/perf/perf.o 00:02:38.760 LINK mem_callbacks 00:02:38.760 CC test/event/scheduler/scheduler.o 00:02:38.760 CC examples/sock/hello_world/hello_sock.o 00:02:38.760 LINK spdk_nvme_perf 00:02:38.760 LINK spdk_nvme_identify 00:02:38.760 CC app/vhost/vhost.o 00:02:38.760 LINK spdk_top 00:02:38.760 CC examples/thread/thread/thread_ex.o 00:02:38.760 LINK event_perf 00:02:38.760 LINK lsvmd 00:02:38.760 LINK reactor 00:02:38.760 LINK reactor_perf 00:02:38.760 LINK led 00:02:38.760 LINK app_repeat 00:02:39.018 LINK vhost 00:02:39.018 LINK scheduler 00:02:39.018 CC test/nvme/aer/aer.o 00:02:39.018 LINK hello_sock 00:02:39.018 CC test/nvme/overhead/overhead.o 00:02:39.018 CC test/nvme/cuse/cuse.o 00:02:39.018 CC test/nvme/boot_partition/boot_partition.o 00:02:39.018 CC test/nvme/e2edp/nvme_dp.o 00:02:39.018 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.018 CC test/nvme/reset/reset.o 00:02:39.018 CC test/nvme/reserve/reserve.o 00:02:39.018 CC test/nvme/sgl/sgl.o 00:02:39.018 CC test/nvme/err_injection/err_injection.o 00:02:39.018 CC test/nvme/compliance/nvme_compliance.o 00:02:39.018 CC test/nvme/simple_copy/simple_copy.o 00:02:39.018 CC test/nvme/fdp/fdp.o 00:02:39.018 LINK idxd_perf 00:02:39.018 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:39.018 CC test/accel/dif/dif.o 00:02:39.018 CC test/nvme/startup/startup.o 00:02:39.018 CC test/nvme/connect_stress/connect_stress.o 00:02:39.018 CC test/blobfs/mkfs/mkfs.o 00:02:39.018 LINK thread 00:02:39.018 LINK memory_ut 00:02:39.018 CC test/lvol/esnap/esnap.o 00:02:39.018 LINK boot_partition 00:02:39.276 LINK err_injection 
00:02:39.276 LINK startup 00:02:39.276 LINK fused_ordering 00:02:39.276 LINK doorbell_aers 00:02:39.276 LINK connect_stress 00:02:39.276 LINK reserve 00:02:39.276 LINK simple_copy 00:02:39.276 LINK reset 00:02:39.276 LINK aer 00:02:39.276 LINK overhead 00:02:39.276 LINK nvme_dp 00:02:39.276 LINK mkfs 00:02:39.276 LINK sgl 00:02:39.276 LINK fdp 00:02:39.276 LINK nvme_compliance 00:02:39.276 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.276 CC examples/nvme/hello_world/hello_world.o 00:02:39.276 CC examples/nvme/arbitration/arbitration.o 00:02:39.276 CC examples/nvme/hotplug/hotplug.o 00:02:39.276 CC examples/nvme/abort/abort.o 00:02:39.276 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.276 CC examples/nvme/reconnect/reconnect.o 00:02:39.276 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.534 LINK iscsi_fuzz 00:02:39.534 CC examples/accel/perf/accel_perf.o 00:02:39.534 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:39.534 CC examples/blob/hello_world/hello_blob.o 00:02:39.534 CC examples/blob/cli/blobcli.o 00:02:39.534 LINK dif 00:02:39.534 LINK hello_world 00:02:39.534 LINK cmb_copy 00:02:39.534 LINK pmr_persistence 00:02:39.534 LINK hotplug 00:02:39.534 LINK arbitration 00:02:39.792 LINK reconnect 00:02:39.792 LINK abort 00:02:39.792 LINK hello_blob 00:02:39.792 LINK nvme_manage 00:02:39.792 LINK hello_fsdev 00:02:39.792 LINK accel_perf 00:02:40.051 LINK blobcli 00:02:40.051 LINK cuse 00:02:40.051 CC test/bdev/bdevio/bdevio.o 00:02:40.308 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.308 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.308 LINK bdevio 00:02:40.566 LINK hello_bdev 00:02:40.824 LINK bdevperf 00:02:41.392 CC examples/nvmf/nvmf/nvmf.o 00:02:41.652 LINK nvmf 00:02:42.588 LINK esnap 00:02:42.588 00:02:42.588 real 0m52.743s 00:02:42.588 user 7m56.313s 00:02:42.588 sys 3m49.584s 00:02:42.588 11:04:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:42.588 11:04:15 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:42.588 ************************************ 00:02:42.588 END TEST make 00:02:42.588 ************************************ 00:02:42.846 11:04:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:42.846 11:04:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:42.846 11:04:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:42.846 11:04:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.846 11:04:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:42.846 11:04:15 -- pm/common@44 -- $ pid=1427210 00:02:42.846 11:04:15 -- pm/common@50 -- $ kill -TERM 1427210 00:02:42.846 11:04:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.846 11:04:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:42.846 11:04:15 -- pm/common@44 -- $ pid=1427211 00:02:42.846 11:04:15 -- pm/common@50 -- $ kill -TERM 1427211 00:02:42.846 11:04:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.846 11:04:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:42.846 11:04:15 -- pm/common@44 -- $ pid=1427214 00:02:42.846 11:04:15 -- pm/common@50 -- $ kill -TERM 1427214 00:02:42.846 11:04:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.846 11:04:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:42.846 11:04:15 -- pm/common@44 -- $ pid=1427237 00:02:42.846 11:04:15 -- pm/common@50 -- $ sudo -E kill -TERM 1427237 00:02:42.846 11:04:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:42.846 11:04:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:42.846 11:04:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:42.846 11:04:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:42.846 11:04:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:42.846 11:04:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:42.846 11:04:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:42.846 11:04:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:42.846 11:04:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:42.846 11:04:15 -- scripts/common.sh@336 -- # IFS=.-: 00:02:42.846 11:04:15 -- scripts/common.sh@336 -- # read -ra ver1 00:02:42.846 11:04:15 -- scripts/common.sh@337 -- # IFS=.-: 00:02:42.846 11:04:15 -- scripts/common.sh@337 -- # read -ra ver2 00:02:42.846 11:04:15 -- scripts/common.sh@338 -- # local 'op=<' 00:02:42.846 11:04:15 -- scripts/common.sh@340 -- # ver1_l=2 00:02:42.846 11:04:15 -- scripts/common.sh@341 -- # ver2_l=1 00:02:42.846 11:04:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:42.846 11:04:15 -- scripts/common.sh@344 -- # case "$op" in 00:02:42.846 11:04:15 -- scripts/common.sh@345 -- # : 1 00:02:42.846 11:04:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:42.846 11:04:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:42.846 11:04:15 -- scripts/common.sh@365 -- # decimal 1 00:02:42.846 11:04:15 -- scripts/common.sh@353 -- # local d=1 00:02:42.846 11:04:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:42.846 11:04:15 -- scripts/common.sh@355 -- # echo 1 00:02:42.846 11:04:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:42.846 11:04:15 -- scripts/common.sh@366 -- # decimal 2 00:02:42.846 11:04:15 -- scripts/common.sh@353 -- # local d=2 00:02:42.846 11:04:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:42.846 11:04:15 -- scripts/common.sh@355 -- # echo 2 00:02:42.846 11:04:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:42.846 11:04:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:42.846 11:04:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:42.846 11:04:15 -- scripts/common.sh@368 -- # return 0 00:02:42.846 11:04:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:42.846 11:04:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:42.846 --rc genhtml_branch_coverage=1 00:02:42.846 --rc genhtml_function_coverage=1 00:02:42.846 --rc genhtml_legend=1 00:02:42.846 --rc geninfo_all_blocks=1 00:02:42.846 --rc geninfo_unexecuted_blocks=1 00:02:42.846 00:02:42.846 ' 00:02:42.846 11:04:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:42.846 --rc genhtml_branch_coverage=1 00:02:42.847 --rc genhtml_function_coverage=1 00:02:42.847 --rc genhtml_legend=1 00:02:42.847 --rc geninfo_all_blocks=1 00:02:42.847 --rc geninfo_unexecuted_blocks=1 00:02:42.847 00:02:42.847 ' 00:02:42.847 11:04:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:42.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:42.847 --rc genhtml_branch_coverage=1 00:02:42.847 --rc 
genhtml_function_coverage=1 00:02:42.847 --rc genhtml_legend=1 00:02:42.847 --rc geninfo_all_blocks=1 00:02:42.847 --rc geninfo_unexecuted_blocks=1 00:02:42.847 00:02:42.847 ' 00:02:42.847 11:04:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:42.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:42.847 --rc genhtml_branch_coverage=1 00:02:42.847 --rc genhtml_function_coverage=1 00:02:42.847 --rc genhtml_legend=1 00:02:42.847 --rc geninfo_all_blocks=1 00:02:42.847 --rc geninfo_unexecuted_blocks=1 00:02:42.847 00:02:42.847 ' 00:02:42.847 11:04:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.847 11:04:15 -- nvmf/common.sh@7 -- # uname -s 00:02:42.847 11:04:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.847 11:04:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.847 11:04:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.847 11:04:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.847 11:04:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.847 11:04:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.847 11:04:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.847 11:04:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.847 11:04:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.847 11:04:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.106 11:04:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:43.106 11:04:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:43.106 11:04:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.106 11:04:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.106 11:04:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:43.106 11:04:15 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:43.106 11:04:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:43.106 11:04:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:43.106 11:04:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.106 11:04:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.106 11:04:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.106 11:04:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.106 11:04:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.106 11:04:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.106 11:04:15 -- paths/export.sh@5 -- # export PATH 00:02:43.106 11:04:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.106 11:04:15 -- nvmf/common.sh@51 -- # : 0 00:02:43.106 11:04:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:43.106 11:04:15 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:43.106 11:04:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:43.106 11:04:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.106 11:04:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.106 11:04:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:43.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:43.106 11:04:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:43.106 11:04:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:43.106 11:04:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:43.106 11:04:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.106 11:04:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.106 11:04:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.106 11:04:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.106 11:04:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:43.106 11:04:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.106 11:04:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:43.106 11:04:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.106 11:04:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.106 11:04:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.106 11:04:15 -- spdk/autotest.sh@48 -- # udevadm_pid=1490844 00:02:43.106 11:04:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:43.106 11:04:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.106 11:04:15 -- pm/common@17 -- # local monitor 00:02:43.106 11:04:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.106 11:04:15 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:43.106 11:04:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.106 11:04:15 -- pm/common@21 -- # date +%s 00:02:43.106 11:04:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.106 11:04:15 -- pm/common@21 -- # date +%s 00:02:43.106 11:04:15 -- pm/common@25 -- # sleep 1 00:02:43.106 11:04:15 -- pm/common@21 -- # date +%s 00:02:43.106 11:04:15 -- pm/common@21 -- # date +%s 00:02:43.106 11:04:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479455 00:02:43.106 11:04:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479455 00:02:43.106 11:04:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479455 00:02:43.106 11:04:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479455 00:02:43.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479455_collect-cpu-load.pm.log 00:02:43.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479455_collect-vmstat.pm.log 00:02:43.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479455_collect-cpu-temp.pm.log 00:02:43.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479455_collect-bmc-pm.bmc.pm.log 00:02:44.045 
11:04:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:44.045 11:04:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:44.045 11:04:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:44.045 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:02:44.045 11:04:16 -- spdk/autotest.sh@59 -- # create_test_list 00:02:44.045 11:04:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:44.045 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:02:44.045 11:04:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:44.045 11:04:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.045 11:04:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.045 11:04:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:44.045 11:04:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.045 11:04:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:44.045 11:04:16 -- common/autotest_common.sh@1457 -- # uname 00:02:44.045 11:04:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:44.045 11:04:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:44.045 11:04:16 -- common/autotest_common.sh@1477 -- # uname 00:02:44.045 11:04:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:44.045 11:04:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:44.045 11:04:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:44.045 lcov: LCOV version 1.15 00:02:44.045 11:04:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:08.596 11:04:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:08.596 11:04:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:08.596 11:04:40 -- common/autotest_common.sh@10 -- # set +x 00:03:08.596 11:04:40 -- spdk/autotest.sh@78 -- # rm -f 00:03:08.596 11:04:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.504 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:03:10.504 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.504 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.504 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.504 11:04:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:10.504 11:04:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:10.504 11:04:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:10.504 11:04:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:10.504 11:04:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:10.504 11:04:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:10.504 11:04:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:10.504 11:04:43 -- common/autotest_common.sh@1669 -- # bdf=0000:86:00.0 00:03:10.504 11:04:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:10.504 11:04:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:10.504 11:04:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:10.504 11:04:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.504 11:04:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:10.504 11:04:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:10.504 11:04:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.504 11:04:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:10.504 11:04:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:10.504 11:04:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:10.504 11:04:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.763 No valid GPT data, bailing 00:03:10.763 11:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.763 11:04:43 -- scripts/common.sh@394 -- # pt= 00:03:10.763 11:04:43 -- scripts/common.sh@395 -- 
# return 1 00:03:10.763 11:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.763 1+0 records in 00:03:10.763 1+0 records out 00:03:10.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582707 s, 180 MB/s 00:03:10.763 11:04:43 -- spdk/autotest.sh@105 -- # sync 00:03:10.763 11:04:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.763 11:04:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.763 11:04:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.335 11:04:49 -- spdk/autotest.sh@111 -- # uname -s 00:03:17.335 11:04:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:17.335 11:04:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:17.335 11:04:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.887 Hugepages 00:03:19.887 node hugesize free / total 00:03:19.887 node0 1048576kB 0 / 0 00:03:19.887 node0 2048kB 0 / 0 00:03:19.887 node1 1048576kB 0 / 0 00:03:19.887 node1 2048kB 0 / 0 00:03:19.887 00:03:19.887 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.887 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:19.887 I/OAT 0000:80:04.6 8086 2021 1 
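The block_in_use probe and wipe traced above can be sketched as follows. `DEV` defaults to a throwaway temp file here so the sketch is safe to run; the real autotest operates on `/dev/nvme*n*` block devices, where `blkid` typically needs root, and `conv=notrunc` is added here only so a regular test file is not truncated:

```shell
DEV="${DEV:-$(mktemp)}"

# Ask blkid for the partition-table type; empty output mirrors the
# "No valid GPT data, bailing" path in the log above.
pt=$(blkid -s PTTYPE -o value "$DEV" 2>/dev/null || true)

if [[ -z $pt ]]; then
    # No partition table detected: clear the first MiB of the device,
    # as autotest.sh@101 does with dd if=/dev/zero bs=1M count=1.
    dd if=/dev/zero of="$DEV" bs=1M count=1 conv=notrunc 2>/dev/null
fi
```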
ioatdma - - 00:03:19.887 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:19.887 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:19.887 11:04:52 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.887 11:04:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.887 11:04:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:19.887 11:04:52 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.181 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.181 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.748 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.748 11:04:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:24.687 11:04:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:24.687 11:04:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:24.687 11:04:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.687 11:04:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:24.687 11:04:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:24.687 11:04:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:24.687 11:04:57 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.687 11:04:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.687 11:04:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:24.946 11:04:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:24.946 11:04:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:03:24.946 11:04:57 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.485 Waiting for block devices as requested 00:03:27.745 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:03:27.745 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:28.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:28.004 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:28.004 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.004 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.263 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.263 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:28.263 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:28.521 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:28.521 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:28.521 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:28.780 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.780 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.780 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.780 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:29.039 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:29.039 11:05:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:29.039 11:05:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:29.039 11:05:01 -- 
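get_nvme_bdfs above pipes the bdev config JSON emitted by `gen_nvme.sh` through `jq` to collect each controller's PCI address. A sketch of that extraction — `config_json` is a stand-in for the `gen_nvme.sh` output, which in this run contains the single controller at 0000:86:00.0 (requires `jq`):

```shell
# Stand-in for `gen_nvme.sh` output: one bdev entry per controller.
config_json='{"config":[{"params":{"traddr":"0000:86:00.0"}}]}'

# Same jq filter as the trace: pull every controller's traddr.
bdfs=($(printf '%s\n' "$config_json" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"
```

The `(( 1 == 0 ))` check in the trace is the empty-array guard on this result.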
common/autotest_common.sh@1487 -- # grep 0000:86:00.0/nvme/nvme 00:03:29.039 11:05:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:03:29.039 11:05:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:29.039 11:05:01 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:29.039 11:05:01 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:29.039 11:05:01 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:29.039 11:05:01 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:29.039 11:05:01 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:29.039 11:05:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:29.039 11:05:01 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:29.039 11:05:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:29.039 11:05:01 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:29.039 11:05:01 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:29.039 11:05:01 -- common/autotest_common.sh@1543 -- # continue 00:03:29.039 11:05:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:29.039 11:05:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:29.039 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:03:29.039 11:05:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:29.039 11:05:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.039 
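The OACS check traced above parses `nvme id-ctrl` output and masks bit 3 (Namespace Management) of the Optional Admin Command Support field; the follow-up `unvmcap` check then confirms there is no unallocated capacity before continuing. A sketch against a captured id-ctrl line, since the live command needs root and real hardware:

```shell
# Stand-in for `nvme id-ctrl /dev/nvme0 | grep oacs`.
id_ctrl_line='oacs      : 0xe'

# Same cut as the trace, then mask bit 3 (0x8 = Namespace Management).
oacs=$(cut -d: -f2 <<<"$id_ctrl_line")
oacs_ns_manage=$(( oacs & 0x8 ))

if (( oacs_ns_manage != 0 )); then
    echo "controller supports namespace management"
fi
```

With `oacs=' 0xe'` as in the log, the mask yields 8, matching `oacs_ns_manage=8` in the trace.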
11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:03:29.039 11:05:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.334 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.334 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.900 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.161 11:05:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:33.161 11:05:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.161 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:03:33.161 11:05:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:33.161 11:05:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:33.161 11:05:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:33.161 11:05:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:33.161 11:05:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:33.161 11:05:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:33.161 11:05:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:33.161 11:05:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:33.161 11:05:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:33.161 11:05:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:33.161 11:05:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:33.161 11:05:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.161 11:05:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.161 11:05:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:33.161 11:05:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:03:33.161 11:05:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:33.161 11:05:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:03:33.161 11:05:06 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:33.161 11:05:06 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:33.161 11:05:06 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:33.161 11:05:06 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:33.161 11:05:06 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:86:00.0 00:03:33.162 11:05:06 -- common/autotest_common.sh@1579 -- # [[ -z 0000:86:00.0 ]] 00:03:33.162 11:05:06 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1505831 00:03:33.162 11:05:06 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.162 11:05:06 -- common/autotest_common.sh@1585 -- # waitforlisten 1505831 00:03:33.162 11:05:06 -- common/autotest_common.sh@835 -- # '[' -z 1505831 ']' 00:03:33.162 11:05:06 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.162 11:05:06 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.162 11:05:06 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
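get_nvme_bdfs_by_id above filters the discovered BDFs by PCI device id (0x0a54 in this run) by reading `/sys/bus/pci/devices/<bdf>/device`. A sketch of that loop — `PCI_ROOT` is a hypothetical override added so it can run against a fake sysfs tree:

```shell
PCI_ROOT="${PCI_ROOT:-/sys/bus/pci/devices}"

match_bdfs_by_device_id() {
    local want=$1 dev_dir device
    for dev_dir in "$PCI_ROOT"/*; do
        [[ -e $dev_dir/device ]] || continue
        device=$(<"$dev_dir/device")
        # Keep the BDF when its device id matches, e.g. 0x0a54.
        [[ $device == "$want" ]] && basename "$dev_dir"
    done
    return 0
}
```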
UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.162 11:05:06 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.162 11:05:06 -- common/autotest_common.sh@10 -- # set +x 00:03:33.421 [2024-12-06 11:05:06.129239] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:03:33.421 [2024-12-06 11:05:06.129294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505831 ] 00:03:33.421 [2024-12-06 11:05:06.201160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.421 [2024-12-06 11:05:06.241199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.680 11:05:06 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.680 11:05:06 -- common/autotest_common.sh@868 -- # return 0 00:03:33.680 11:05:06 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:33.680 11:05:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:33.680 11:05:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:03:36.970 nvme0n1 00:03:36.970 11:05:09 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:36.970 [2024-12-06 11:05:09.623751] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:36.970 request: 00:03:36.970 { 00:03:36.970 "nvme_ctrlr_name": "nvme0", 00:03:36.970 "password": "test", 00:03:36.970 "method": "bdev_nvme_opal_revert", 00:03:36.970 "req_id": 1 00:03:36.970 } 00:03:36.970 Got JSON-RPC error response 00:03:36.970 response: 00:03:36.970 { 00:03:36.970 
"code": -32602, 00:03:36.970 "message": "Invalid parameters" 00:03:36.970 } 00:03:36.970 11:05:09 -- common/autotest_common.sh@1591 -- # true 00:03:36.970 11:05:09 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:36.970 11:05:09 -- common/autotest_common.sh@1595 -- # killprocess 1505831 00:03:36.970 11:05:09 -- common/autotest_common.sh@954 -- # '[' -z 1505831 ']' 00:03:36.970 11:05:09 -- common/autotest_common.sh@958 -- # kill -0 1505831 00:03:36.970 11:05:09 -- common/autotest_common.sh@959 -- # uname 00:03:36.970 11:05:09 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.970 11:05:09 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505831 00:03:36.970 11:05:09 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.971 11:05:09 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.971 11:05:09 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505831' 00:03:36.971 killing process with pid 1505831 00:03:36.971 11:05:09 -- common/autotest_common.sh@973 -- # kill 1505831 00:03:36.971 11:05:09 -- common/autotest_common.sh@978 -- # wait 1505831 00:03:38.875 11:05:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:38.875 11:05:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:38.875 11:05:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.875 11:05:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.875 11:05:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:38.875 11:05:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.875 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:03:38.875 11:05:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:38.875 11:05:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:38.875 11:05:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.875 11:05:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.875 11:05:11 
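The killprocess sequence above refuses to signal anything that no longer looks like the spdk_tgt it started: it probes the pid, re-reads its comm name with `ps`, and bails if that resolves to a bare `sudo` wrapper. A rough sketch of that guard:

```shell
killprocess() {
    local pid=$1 process_name
    # Probe liveness first; a vanished pid is treated as already cleaned up.
    kill -0 "$pid" 2>/dev/null || return 0

    # Same sanity check as the trace: never signal a bare sudo wrapper.
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
```

In the run above the comm name resolves to `reactor_0`, so the guard passes and pid 1505831 is killed and waited on.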
-- common/autotest_common.sh@10 -- # set +x 00:03:38.875 ************************************ 00:03:38.875 START TEST env 00:03:38.875 ************************************ 00:03:38.875 11:05:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:38.875 * Looking for test storage... 00:03:38.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:38.875 11:05:11 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:38.875 11:05:11 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:38.875 11:05:11 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:38.875 11:05:11 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:38.876 11:05:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.876 11:05:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.876 11:05:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.876 11:05:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.876 11:05:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.876 11:05:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.876 11:05:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.876 11:05:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.876 11:05:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.876 11:05:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.876 11:05:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.876 11:05:11 env -- scripts/common.sh@344 -- # case "$op" in 00:03:38.876 11:05:11 env -- scripts/common.sh@345 -- # : 1 00:03:38.876 11:05:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.876 11:05:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.876 11:05:11 env -- scripts/common.sh@365 -- # decimal 1 00:03:38.876 11:05:11 env -- scripts/common.sh@353 -- # local d=1 00:03:38.876 11:05:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.876 11:05:11 env -- scripts/common.sh@355 -- # echo 1 00:03:38.876 11:05:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.876 11:05:11 env -- scripts/common.sh@366 -- # decimal 2 00:03:38.876 11:05:11 env -- scripts/common.sh@353 -- # local d=2 00:03:38.876 11:05:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.876 11:05:11 env -- scripts/common.sh@355 -- # echo 2 00:03:38.876 11:05:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.876 11:05:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.876 11:05:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.876 11:05:11 env -- scripts/common.sh@368 -- # return 0 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:38.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.876 --rc genhtml_branch_coverage=1 00:03:38.876 --rc genhtml_function_coverage=1 00:03:38.876 --rc genhtml_legend=1 00:03:38.876 --rc geninfo_all_blocks=1 00:03:38.876 --rc geninfo_unexecuted_blocks=1 00:03:38.876 00:03:38.876 ' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:38.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.876 --rc genhtml_branch_coverage=1 00:03:38.876 --rc genhtml_function_coverage=1 00:03:38.876 --rc genhtml_legend=1 00:03:38.876 --rc geninfo_all_blocks=1 00:03:38.876 --rc geninfo_unexecuted_blocks=1 00:03:38.876 00:03:38.876 ' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:38.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:38.876 --rc genhtml_branch_coverage=1 00:03:38.876 --rc genhtml_function_coverage=1 00:03:38.876 --rc genhtml_legend=1 00:03:38.876 --rc geninfo_all_blocks=1 00:03:38.876 --rc geninfo_unexecuted_blocks=1 00:03:38.876 00:03:38.876 ' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:38.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.876 --rc genhtml_branch_coverage=1 00:03:38.876 --rc genhtml_function_coverage=1 00:03:38.876 --rc genhtml_legend=1 00:03:38.876 --rc geninfo_all_blocks=1 00:03:38.876 --rc geninfo_unexecuted_blocks=1 00:03:38.876 00:03:38.876 ' 00:03:38.876 11:05:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.876 11:05:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.876 ************************************ 00:03:38.876 START TEST env_memory 00:03:38.876 ************************************ 00:03:38.876 11:05:11 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:38.876 00:03:38.876 00:03:38.876 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.876 http://cunit.sourceforge.net/ 00:03:38.876 00:03:38.876 00:03:38.876 Suite: memory 00:03:38.876 Test: alloc and free memory map ...[2024-12-06 11:05:11.645819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:38.876 passed 00:03:38.876 Test: mem map translation ...[2024-12-06 11:05:11.663461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:38.876 [2024-12-06 
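The lt/cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x by splitting dotted versions on `.`, `-`, and `:` and comparing component-wise. A condensed sketch of that comparison, with missing components treated as 0:

```shell
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$3"
    # Walk the longer of the two arrays; absent components count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    # All components equal.
    [[ $op == '==' ]]
}
```

With `lt 1.15 2` as in the trace, the first components already differ (1 < 2), so the comparison short-circuits and the 2.x LCOV_OPTS branch is taken.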
11:05:11.663477] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:38.876 [2024-12-06 11:05:11.663510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:38.876 [2024-12-06 11:05:11.663516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:38.876 passed 00:03:38.876 Test: mem map registration ...[2024-12-06 11:05:11.698241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:38.876 [2024-12-06 11:05:11.698255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:38.876 passed 00:03:38.876 Test: mem map adjacent registrations ...passed 00:03:38.876 00:03:38.876 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.876 suites 1 1 n/a 0 0 00:03:38.876 tests 4 4 4 0 0 00:03:38.876 asserts 152 152 152 0 n/a 00:03:38.876 00:03:38.876 Elapsed time = 0.130 seconds 00:03:38.876 00:03:38.876 real 0m0.143s 00:03:38.876 user 0m0.134s 00:03:38.876 sys 0m0.009s 00:03:38.876 11:05:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.876 11:05:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:38.876 ************************************ 00:03:38.876 END TEST env_memory 00:03:38.876 ************************************ 00:03:38.876 11:05:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:38.876 11:05:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.876 11:05:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.876 ************************************ 00:03:38.876 START TEST env_vtophys 00:03:38.876 ************************************ 00:03:38.876 11:05:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.136 EAL: lib.eal log level changed from notice to debug 00:03:39.136 EAL: Detected lcore 0 as core 0 on socket 0 00:03:39.136 EAL: Detected lcore 1 as core 1 on socket 0 00:03:39.136 EAL: Detected lcore 2 as core 2 on socket 0 00:03:39.136 EAL: Detected lcore 3 as core 3 on socket 0 00:03:39.136 EAL: Detected lcore 4 as core 4 on socket 0 00:03:39.136 EAL: Detected lcore 5 as core 5 on socket 0 00:03:39.136 EAL: Detected lcore 6 as core 6 on socket 0 00:03:39.136 EAL: Detected lcore 7 as core 8 on socket 0 00:03:39.136 EAL: Detected lcore 8 as core 9 on socket 0 00:03:39.136 EAL: Detected lcore 9 as core 10 on socket 0 00:03:39.136 EAL: Detected lcore 10 as core 11 on socket 0 00:03:39.136 EAL: Detected lcore 11 as core 12 on socket 0 00:03:39.136 EAL: Detected lcore 12 as core 13 on socket 0 00:03:39.136 EAL: Detected lcore 13 as core 14 on socket 0 00:03:39.136 EAL: Detected lcore 14 as core 16 on socket 0 00:03:39.136 EAL: Detected lcore 15 as core 17 on socket 0 00:03:39.136 EAL: Detected lcore 16 as core 18 on socket 0 00:03:39.136 EAL: Detected lcore 17 as core 19 on socket 0 00:03:39.136 EAL: Detected lcore 18 as core 20 on socket 0 00:03:39.136 EAL: Detected lcore 19 as core 21 on socket 0 00:03:39.136 EAL: Detected lcore 20 as core 22 on socket 0 00:03:39.136 EAL: Detected lcore 21 as core 24 on socket 0 00:03:39.136 EAL: Detected lcore 22 as core 25 on socket 0 00:03:39.136 EAL: Detected lcore 23 as core 26 on socket 0 00:03:39.136 EAL: Detected lcore 24 as core 27 on socket 0 00:03:39.136 EAL: Detected lcore 25 
as core 28 on socket 0 00:03:39.136 EAL: Detected lcore 26 as core 29 on socket 0 00:03:39.136 EAL: Detected lcore 27 as core 30 on socket 0 00:03:39.136 EAL: Detected lcore 28 as core 0 on socket 1 00:03:39.136 EAL: Detected lcore 29 as core 1 on socket 1 00:03:39.136 EAL: Detected lcore 30 as core 2 on socket 1 00:03:39.136 EAL: Detected lcore 31 as core 3 on socket 1 00:03:39.136 EAL: Detected lcore 32 as core 4 on socket 1 00:03:39.136 EAL: Detected lcore 33 as core 5 on socket 1 00:03:39.136 EAL: Detected lcore 34 as core 6 on socket 1 00:03:39.136 EAL: Detected lcore 35 as core 8 on socket 1 00:03:39.136 EAL: Detected lcore 36 as core 9 on socket 1 00:03:39.136 EAL: Detected lcore 37 as core 10 on socket 1 00:03:39.136 EAL: Detected lcore 38 as core 11 on socket 1 00:03:39.136 EAL: Detected lcore 39 as core 12 on socket 1 00:03:39.136 EAL: Detected lcore 40 as core 13 on socket 1 00:03:39.136 EAL: Detected lcore 41 as core 14 on socket 1 00:03:39.136 EAL: Detected lcore 42 as core 16 on socket 1 00:03:39.136 EAL: Detected lcore 43 as core 17 on socket 1 00:03:39.136 EAL: Detected lcore 44 as core 18 on socket 1 00:03:39.136 EAL: Detected lcore 45 as core 19 on socket 1 00:03:39.136 EAL: Detected lcore 46 as core 20 on socket 1 00:03:39.136 EAL: Detected lcore 47 as core 21 on socket 1 00:03:39.136 EAL: Detected lcore 48 as core 22 on socket 1 00:03:39.136 EAL: Detected lcore 49 as core 24 on socket 1 00:03:39.136 EAL: Detected lcore 50 as core 25 on socket 1 00:03:39.136 EAL: Detected lcore 51 as core 26 on socket 1 00:03:39.136 EAL: Detected lcore 52 as core 27 on socket 1 00:03:39.136 EAL: Detected lcore 53 as core 28 on socket 1 00:03:39.136 EAL: Detected lcore 54 as core 29 on socket 1 00:03:39.136 EAL: Detected lcore 55 as core 30 on socket 1 00:03:39.136 EAL: Detected lcore 56 as core 0 on socket 0 00:03:39.136 EAL: Detected lcore 57 as core 1 on socket 0 00:03:39.136 EAL: Detected lcore 58 as core 2 on socket 0 00:03:39.136 EAL: Detected lcore 59 as 
core 3 on socket 0 00:03:39.136 EAL: Detected lcore 60 as core 4 on socket 0 00:03:39.136 EAL: Detected lcore 61 as core 5 on socket 0 00:03:39.136 EAL: Detected lcore 62 as core 6 on socket 0 00:03:39.136 EAL: Detected lcore 63 as core 8 on socket 0 00:03:39.136 EAL: Detected lcore 64 as core 9 on socket 0 00:03:39.136 EAL: Detected lcore 65 as core 10 on socket 0 00:03:39.136 EAL: Detected lcore 66 as core 11 on socket 0 00:03:39.136 EAL: Detected lcore 67 as core 12 on socket 0 00:03:39.136 EAL: Detected lcore 68 as core 13 on socket 0 00:03:39.136 EAL: Detected lcore 69 as core 14 on socket 0 00:03:39.136 EAL: Detected lcore 70 as core 16 on socket 0 00:03:39.136 EAL: Detected lcore 71 as core 17 on socket 0 00:03:39.136 EAL: Detected lcore 72 as core 18 on socket 0 00:03:39.136 EAL: Detected lcore 73 as core 19 on socket 0 00:03:39.136 EAL: Detected lcore 74 as core 20 on socket 0 00:03:39.136 EAL: Detected lcore 75 as core 21 on socket 0 00:03:39.136 EAL: Detected lcore 76 as core 22 on socket 0 00:03:39.136 EAL: Detected lcore 77 as core 24 on socket 0 00:03:39.136 EAL: Detected lcore 78 as core 25 on socket 0 00:03:39.136 EAL: Detected lcore 79 as core 26 on socket 0 00:03:39.136 EAL: Detected lcore 80 as core 27 on socket 0 00:03:39.136 EAL: Detected lcore 81 as core 28 on socket 0 00:03:39.136 EAL: Detected lcore 82 as core 29 on socket 0 00:03:39.136 EAL: Detected lcore 83 as core 30 on socket 0 00:03:39.136 EAL: Detected lcore 84 as core 0 on socket 1 00:03:39.137 EAL: Detected lcore 85 as core 1 on socket 1 00:03:39.137 EAL: Detected lcore 86 as core 2 on socket 1 00:03:39.137 EAL: Detected lcore 87 as core 3 on socket 1 00:03:39.137 EAL: Detected lcore 88 as core 4 on socket 1 00:03:39.137 EAL: Detected lcore 89 as core 5 on socket 1 00:03:39.137 EAL: Detected lcore 90 as core 6 on socket 1 00:03:39.137 EAL: Detected lcore 91 as core 8 on socket 1 00:03:39.137 EAL: Detected lcore 92 as core 9 on socket 1 00:03:39.137 EAL: Detected lcore 93 as core 10 
on socket 1 00:03:39.137 EAL: Detected lcore 94 as core 11 on socket 1 00:03:39.137 EAL: Detected lcore 95 as core 12 on socket 1 00:03:39.137 EAL: Detected lcore 96 as core 13 on socket 1 00:03:39.137 EAL: Detected lcore 97 as core 14 on socket 1 00:03:39.137 EAL: Detected lcore 98 as core 16 on socket 1 00:03:39.137 EAL: Detected lcore 99 as core 17 on socket 1 00:03:39.137 EAL: Detected lcore 100 as core 18 on socket 1 00:03:39.137 EAL: Detected lcore 101 as core 19 on socket 1 00:03:39.137 EAL: Detected lcore 102 as core 20 on socket 1 00:03:39.137 EAL: Detected lcore 103 as core 21 on socket 1 00:03:39.137 EAL: Detected lcore 104 as core 22 on socket 1 00:03:39.137 EAL: Detected lcore 105 as core 24 on socket 1 00:03:39.137 EAL: Detected lcore 106 as core 25 on socket 1 00:03:39.137 EAL: Detected lcore 107 as core 26 on socket 1 00:03:39.137 EAL: Detected lcore 108 as core 27 on socket 1 00:03:39.137 EAL: Detected lcore 109 as core 28 on socket 1 00:03:39.137 EAL: Detected lcore 110 as core 29 on socket 1 00:03:39.137 EAL: Detected lcore 111 as core 30 on socket 1 00:03:39.137 EAL: Maximum logical cores by configuration: 128 00:03:39.137 EAL: Detected CPU lcores: 112 00:03:39.137 EAL: Detected NUMA nodes: 2 00:03:39.137 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:39.137 EAL: Detected shared linkage of DPDK 00:03:39.137 EAL: No shared files mode enabled, IPC will be disabled 00:03:39.137 EAL: Bus pci wants IOVA as 'DC' 00:03:39.137 EAL: Buses did not request a specific IOVA mode. 00:03:39.137 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:39.137 EAL: Selected IOVA mode 'VA' 00:03:39.137 EAL: Probing VFIO support... 
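EAL's "Detected lcore N as core C on socket S" lines above come from per-CPU topology files under sysfs. A rough shell equivalent of that readout — not EAL's actual implementation, and `TOPO_ROOT` is a hypothetical override so the loop can run against a fake tree instead of `/sys/devices/system/cpu`:

```shell
TOPO_ROOT="${TOPO_ROOT:-/sys/devices/system/cpu}"

describe_lcores() {
    local cpu_dir cpu core socket
    for cpu_dir in "$TOPO_ROOT"/cpu[0-9]*; do
        [[ -e $cpu_dir/topology/core_id ]] || continue
        cpu=${cpu_dir##*cpu}
        core=$(<"$cpu_dir/topology/core_id")
        socket=$(<"$cpu_dir/topology/physical_package_id")
        echo "Detected lcore $cpu as core $core on socket $socket"
    done
    return 0
}
```

The two distinct `physical_package_id` values on this box are what EAL later reports as "Detected NUMA nodes: 2".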
00:03:39.137 EAL: IOMMU type 1 (Type 1) is supported 00:03:39.137 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:39.137 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:39.137 EAL: VFIO support initialized 00:03:39.137 EAL: Ask a virtual area of 0x2e000 bytes 00:03:39.137 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:39.137 EAL: Setting up physically contiguous memory... 00:03:39.137 EAL: Setting maximum number of open files to 524288 00:03:39.137 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:39.137 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:39.137 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:39.137 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:39.137 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:39.137 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.137 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:39.137 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.137 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.137 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:39.137 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:39.137 EAL: Hugepages will be freed exactly as allocated. 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: TSC frequency is ~2200000 KHz 00:03:39.137 EAL: Main lcore 0 is ready (tid=7f11f9836a00;cpuset=[0]) 00:03:39.137 EAL: Trying to obtain current memory policy. 00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 0 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 2MB 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:39.137 EAL: Mem event callback 'spdk:(nil)' registered 00:03:39.137 00:03:39.137 00:03:39.137 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.137 http://cunit.sourceforge.net/ 00:03:39.137 00:03:39.137 00:03:39.137 Suite: components_suite 00:03:39.137 Test: vtophys_malloc_test ...passed 00:03:39.137 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 4MB 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was shrunk by 4MB 00:03:39.137 EAL: Trying to obtain current memory policy. 
00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 6MB 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was shrunk by 6MB 00:03:39.137 EAL: Trying to obtain current memory policy. 00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 10MB 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was shrunk by 10MB 00:03:39.137 EAL: Trying to obtain current memory policy. 00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 18MB 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was shrunk by 18MB 00:03:39.137 EAL: Trying to obtain current memory policy. 
00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was expanded by 34MB 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.137 EAL: Heap on socket 0 was shrunk by 34MB 00:03:39.137 EAL: Trying to obtain current memory policy. 00:03:39.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.137 EAL: Restoring previous memory policy: 4 00:03:39.137 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.137 EAL: request: mp_malloc_sync 00:03:39.137 EAL: No shared files mode enabled, IPC is disabled 00:03:39.138 EAL: Heap on socket 0 was expanded by 66MB 00:03:39.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.138 EAL: request: mp_malloc_sync 00:03:39.138 EAL: No shared files mode enabled, IPC is disabled 00:03:39.138 EAL: Heap on socket 0 was shrunk by 66MB 00:03:39.138 EAL: Trying to obtain current memory policy. 00:03:39.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.138 EAL: Restoring previous memory policy: 4 00:03:39.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.138 EAL: request: mp_malloc_sync 00:03:39.138 EAL: No shared files mode enabled, IPC is disabled 00:03:39.138 EAL: Heap on socket 0 was expanded by 130MB 00:03:39.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.138 EAL: request: mp_malloc_sync 00:03:39.138 EAL: No shared files mode enabled, IPC is disabled 00:03:39.138 EAL: Heap on socket 0 was shrunk by 130MB 00:03:39.138 EAL: Trying to obtain current memory policy. 
00:03:39.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.138 EAL: Restoring previous memory policy: 4 00:03:39.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.138 EAL: request: mp_malloc_sync 00:03:39.138 EAL: No shared files mode enabled, IPC is disabled 00:03:39.138 EAL: Heap on socket 0 was expanded by 258MB 00:03:39.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.398 EAL: request: mp_malloc_sync 00:03:39.398 EAL: No shared files mode enabled, IPC is disabled 00:03:39.398 EAL: Heap on socket 0 was shrunk by 258MB 00:03:39.398 EAL: Trying to obtain current memory policy. 00:03:39.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.398 EAL: Restoring previous memory policy: 4 00:03:39.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.398 EAL: request: mp_malloc_sync 00:03:39.398 EAL: No shared files mode enabled, IPC is disabled 00:03:39.398 EAL: Heap on socket 0 was expanded by 514MB 00:03:39.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.657 EAL: request: mp_malloc_sync 00:03:39.657 EAL: No shared files mode enabled, IPC is disabled 00:03:39.657 EAL: Heap on socket 0 was shrunk by 514MB 00:03:39.657 EAL: Trying to obtain current memory policy. 
00:03:39.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.657 EAL: Restoring previous memory policy: 4 00:03:39.657 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.657 EAL: request: mp_malloc_sync 00:03:39.657 EAL: No shared files mode enabled, IPC is disabled 00:03:39.657 EAL: Heap on socket 0 was expanded by 1026MB 00:03:39.917 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.177 EAL: request: mp_malloc_sync 00:03:40.177 EAL: No shared files mode enabled, IPC is disabled 00:03:40.177 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.177 passed 00:03:40.177 00:03:40.177 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.177 suites 1 1 n/a 0 0 00:03:40.177 tests 2 2 2 0 0 00:03:40.177 asserts 497 497 497 0 n/a 00:03:40.177 00:03:40.177 Elapsed time = 0.957 seconds 00:03:40.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.177 EAL: request: mp_malloc_sync 00:03:40.177 EAL: No shared files mode enabled, IPC is disabled 00:03:40.177 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.177 EAL: No shared files mode enabled, IPC is disabled 00:03:40.177 EAL: No shared files mode enabled, IPC is disabled 00:03:40.177 EAL: No shared files mode enabled, IPC is disabled 00:03:40.177 00:03:40.177 real 0m1.082s 00:03:40.177 user 0m0.642s 00:03:40.177 sys 0m0.417s 00:03:40.177 11:05:12 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.177 11:05:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 ************************************ 00:03:40.177 END TEST env_vtophys 00:03:40.177 ************************************ 00:03:40.177 11:05:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.177 11:05:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.177 11:05:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.177 11:05:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 
************************************ 00:03:40.177 START TEST env_pci 00:03:40.177 ************************************ 00:03:40.177 11:05:12 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.177 00:03:40.177 00:03:40.177 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.177 http://cunit.sourceforge.net/ 00:03:40.177 00:03:40.177 00:03:40.177 Suite: pci 00:03:40.177 Test: pci_hook ...[2024-12-06 11:05:12.980434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1507092 has claimed it 00:03:40.177 EAL: Cannot find device (10000:00:01.0) 00:03:40.177 EAL: Failed to attach device on primary process 00:03:40.177 passed 00:03:40.177 00:03:40.177 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.177 suites 1 1 n/a 0 0 00:03:40.177 tests 1 1 1 0 0 00:03:40.177 asserts 25 25 25 0 n/a 00:03:40.177 00:03:40.177 Elapsed time = 0.027 seconds 00:03:40.177 00:03:40.177 real 0m0.047s 00:03:40.177 user 0m0.014s 00:03:40.177 sys 0m0.033s 00:03:40.177 11:05:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.177 11:05:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 ************************************ 00:03:40.177 END TEST env_pci 00:03:40.177 ************************************ 00:03:40.177 11:05:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:40.177 11:05:13 env -- env/env.sh@15 -- # uname 00:03:40.177 11:05:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:40.177 11:05:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:40.177 11:05:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.177 11:05:13 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:40.177 11:05:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.177 11:05:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 ************************************ 00:03:40.177 START TEST env_dpdk_post_init 00:03:40.177 ************************************ 00:03:40.177 11:05:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.437 EAL: Detected CPU lcores: 112 00:03:40.437 EAL: Detected NUMA nodes: 2 00:03:40.437 EAL: Detected shared linkage of DPDK 00:03:40.437 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.437 EAL: Selected IOVA mode 'VA' 00:03:40.437 EAL: VFIO support initialized 00:03:40.437 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.437 EAL: Using IOMMU type 1 (Type 1) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:40.696 EAL: Ignore mapping IO port bar(1) 00:03:40.696 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:40.696 EAL: Ignore mapping IO port bar(1) 00:03:40.696 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:40.696 EAL: Ignore mapping IO port bar(1) 00:03:40.696 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:41.264 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:03:44.553 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:03:44.553 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:03:44.810 Starting DPDK initialization... 00:03:44.810 Starting SPDK post initialization... 00:03:44.810 SPDK NVMe probe 00:03:44.810 Attaching to 0000:86:00.0 00:03:44.810 Attached to 0000:86:00.0 00:03:44.810 Cleaning up... 
00:03:44.810 00:03:44.810 real 0m4.403s 00:03:44.810 user 0m3.012s 00:03:44.810 sys 0m0.461s 00:03:44.810 11:05:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.810 11:05:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 END TEST env_dpdk_post_init 00:03:44.810 ************************************ 00:03:44.810 11:05:17 env -- env/env.sh@26 -- # uname 00:03:44.810 11:05:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:44.810 11:05:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.810 11:05:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.810 11:05:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.810 11:05:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 START TEST env_mem_callbacks 00:03:44.810 ************************************ 00:03:44.810 11:05:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.810 EAL: Detected CPU lcores: 112 00:03:44.810 EAL: Detected NUMA nodes: 2 00:03:44.810 EAL: Detected shared linkage of DPDK 00:03:44.810 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.810 EAL: Selected IOVA mode 'VA' 00:03:44.810 EAL: VFIO support initialized 00:03:44.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.810 00:03:44.810 00:03:44.810 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.810 http://cunit.sourceforge.net/ 00:03:44.810 00:03:44.810 00:03:44.810 Suite: memory 00:03:44.810 Test: test ... 
00:03:44.810 register 0x200000200000 2097152 00:03:44.810 malloc 3145728 00:03:44.810 register 0x200000400000 4194304 00:03:44.810 buf 0x200000500000 len 3145728 PASSED 00:03:44.810 malloc 64 00:03:44.810 buf 0x2000004fff40 len 64 PASSED 00:03:44.810 malloc 4194304 00:03:44.810 register 0x200000800000 6291456 00:03:44.810 buf 0x200000a00000 len 4194304 PASSED 00:03:44.810 free 0x200000500000 3145728 00:03:44.810 free 0x2000004fff40 64 00:03:44.810 unregister 0x200000400000 4194304 PASSED 00:03:44.810 free 0x200000a00000 4194304 00:03:44.810 unregister 0x200000800000 6291456 PASSED 00:03:44.810 malloc 8388608 00:03:44.810 register 0x200000400000 10485760 00:03:44.810 buf 0x200000600000 len 8388608 PASSED 00:03:44.810 free 0x200000600000 8388608 00:03:44.810 unregister 0x200000400000 10485760 PASSED 00:03:44.810 passed 00:03:44.810 00:03:44.810 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.810 suites 1 1 n/a 0 0 00:03:44.810 tests 1 1 1 0 0 00:03:44.810 asserts 15 15 15 0 n/a 00:03:44.810 00:03:44.810 Elapsed time = 0.008 seconds 00:03:44.810 00:03:44.810 real 0m0.060s 00:03:44.810 user 0m0.020s 00:03:44.810 sys 0m0.040s 00:03:44.810 11:05:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.810 11:05:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 END TEST env_mem_callbacks 00:03:44.810 ************************************ 00:03:44.810 00:03:44.810 real 0m6.274s 00:03:44.810 user 0m4.064s 00:03:44.810 sys 0m1.292s 00:03:44.810 11:05:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.810 11:05:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 END TEST env 00:03:44.810 ************************************ 00:03:44.810 11:05:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:44.810 11:05:17 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.810 11:05:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.810 11:05:17 -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 START TEST rpc 00:03:44.810 ************************************ 00:03:44.810 11:05:17 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.068 * Looking for test storage... 00:03:45.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.068 11:05:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.068 11:05:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.068 11:05:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.068 11:05:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.068 11:05:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.068 11:05:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:45.068 11:05:17 rpc -- scripts/common.sh@345 -- # : 1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.068 11:05:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.068 11:05:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@353 -- # local d=1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.068 11:05:17 rpc -- scripts/common.sh@355 -- # echo 1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.068 11:05:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@353 -- # local d=2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.068 11:05:17 rpc -- scripts/common.sh@355 -- # echo 2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.068 11:05:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.068 11:05:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.068 11:05:17 rpc -- scripts/common.sh@368 -- # return 0 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 --rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 --rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:05:17 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.069 --rc genhtml_function_coverage=1 00:03:45.069 --rc genhtml_legend=1 00:03:45.069 --rc geninfo_all_blocks=1 00:03:45.069 --rc geninfo_unexecuted_blocks=1 00:03:45.069 00:03:45.069 ' 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:45.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.069 --rc genhtml_branch_coverage=1 00:03:45.069 --rc genhtml_function_coverage=1 00:03:45.069 --rc genhtml_legend=1 00:03:45.069 --rc geninfo_all_blocks=1 00:03:45.069 --rc geninfo_unexecuted_blocks=1 00:03:45.069 00:03:45.069 ' 00:03:45.069 11:05:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1508266 00:03:45.069 11:05:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:45.069 11:05:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.069 11:05:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1508266 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 1508266 ']' 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.069 11:05:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.069 [2024-12-06 11:05:17.974291] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:03:45.069 [2024-12-06 11:05:17.974340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508266 ] 00:03:45.326 [2024-12-06 11:05:18.044180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.326 [2024-12-06 11:05:18.082888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:45.326 [2024-12-06 11:05:18.082922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1508266' to capture a snapshot of events at runtime. 00:03:45.326 [2024-12-06 11:05:18.082929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:45.326 [2024-12-06 11:05:18.082934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:45.326 [2024-12-06 11:05:18.082939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1508266 for offline analysis/debug. 
00:03:45.326 [2024-12-06 11:05:18.083476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.893 11:05:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.893 11:05:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:45.893 11:05:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.893 11:05:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.893 11:05:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:45.893 11:05:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:45.893 11:05:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.893 11:05:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.893 11:05:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.893 ************************************ 00:03:45.893 START TEST rpc_integrity 00:03:45.893 ************************************ 00:03:45.893 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:45.893 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:45.893 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.893 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.893 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.893 11:05:18 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:45.893 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.153 { 00:03:46.153 "name": "Malloc0", 00:03:46.153 "aliases": [ 00:03:46.153 "22e660e0-43b5-4321-92af-a1cf09cadabb" 00:03:46.153 ], 00:03:46.153 "product_name": "Malloc disk", 00:03:46.153 "block_size": 512, 00:03:46.153 "num_blocks": 16384, 00:03:46.153 "uuid": "22e660e0-43b5-4321-92af-a1cf09cadabb", 00:03:46.153 "assigned_rate_limits": { 00:03:46.153 "rw_ios_per_sec": 0, 00:03:46.153 "rw_mbytes_per_sec": 0, 00:03:46.153 "r_mbytes_per_sec": 0, 00:03:46.153 "w_mbytes_per_sec": 0 00:03:46.153 }, 00:03:46.153 "claimed": false, 00:03:46.153 "zoned": false, 00:03:46.153 "supported_io_types": { 00:03:46.153 "read": true, 00:03:46.153 "write": true, 00:03:46.153 "unmap": true, 00:03:46.153 "flush": true, 00:03:46.153 "reset": true, 00:03:46.153 "nvme_admin": false, 00:03:46.153 "nvme_io": false, 00:03:46.153 "nvme_io_md": false, 00:03:46.153 "write_zeroes": true, 00:03:46.153 "zcopy": true, 00:03:46.153 "get_zone_info": false, 00:03:46.153 
"zone_management": false, 00:03:46.153 "zone_append": false, 00:03:46.153 "compare": false, 00:03:46.153 "compare_and_write": false, 00:03:46.153 "abort": true, 00:03:46.153 "seek_hole": false, 00:03:46.153 "seek_data": false, 00:03:46.153 "copy": true, 00:03:46.153 "nvme_iov_md": false 00:03:46.153 }, 00:03:46.153 "memory_domains": [ 00:03:46.153 { 00:03:46.153 "dma_device_id": "system", 00:03:46.153 "dma_device_type": 1 00:03:46.153 }, 00:03:46.153 { 00:03:46.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.153 "dma_device_type": 2 00:03:46.153 } 00:03:46.153 ], 00:03:46.153 "driver_specific": {} 00:03:46.153 } 00:03:46.153 ]' 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.153 [2024-12-06 11:05:18.925038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.153 [2024-12-06 11:05:18.925073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.153 [2024-12-06 11:05:18.925084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8c85c0 00:03:46.153 [2024-12-06 11:05:18.925090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.153 [2024-12-06 11:05:18.926127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.153 [2024-12-06 11:05:18.926148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.153 Passthru0 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.153 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.153 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.153 { 00:03:46.153 "name": "Malloc0", 00:03:46.153 "aliases": [ 00:03:46.153 "22e660e0-43b5-4321-92af-a1cf09cadabb" 00:03:46.153 ], 00:03:46.153 "product_name": "Malloc disk", 00:03:46.153 "block_size": 512, 00:03:46.153 "num_blocks": 16384, 00:03:46.153 "uuid": "22e660e0-43b5-4321-92af-a1cf09cadabb", 00:03:46.153 "assigned_rate_limits": { 00:03:46.153 "rw_ios_per_sec": 0, 00:03:46.153 "rw_mbytes_per_sec": 0, 00:03:46.153 "r_mbytes_per_sec": 0, 00:03:46.153 "w_mbytes_per_sec": 0 00:03:46.153 }, 00:03:46.153 "claimed": true, 00:03:46.153 "claim_type": "exclusive_write", 00:03:46.153 "zoned": false, 00:03:46.153 "supported_io_types": { 00:03:46.153 "read": true, 00:03:46.153 "write": true, 00:03:46.153 "unmap": true, 00:03:46.153 "flush": true, 00:03:46.153 "reset": true, 00:03:46.153 "nvme_admin": false, 00:03:46.153 "nvme_io": false, 00:03:46.153 "nvme_io_md": false, 00:03:46.153 "write_zeroes": true, 00:03:46.153 "zcopy": true, 00:03:46.153 "get_zone_info": false, 00:03:46.153 "zone_management": false, 00:03:46.153 "zone_append": false, 00:03:46.153 "compare": false, 00:03:46.153 "compare_and_write": false, 00:03:46.153 "abort": true, 00:03:46.153 "seek_hole": false, 00:03:46.153 "seek_data": false, 00:03:46.153 "copy": true, 00:03:46.153 "nvme_iov_md": false 00:03:46.153 }, 00:03:46.153 "memory_domains": [ 00:03:46.153 { 00:03:46.153 "dma_device_id": "system", 00:03:46.153 "dma_device_type": 1 00:03:46.153 }, 00:03:46.153 { 00:03:46.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.153 "dma_device_type": 2 00:03:46.153 } 00:03:46.153 ], 00:03:46.153 "driver_specific": {} 00:03:46.153 }, 00:03:46.153 { 
00:03:46.153 "name": "Passthru0", 00:03:46.153 "aliases": [ 00:03:46.153 "9fce4e92-af34-5618-9505-9fec3a438635" 00:03:46.153 ], 00:03:46.153 "product_name": "passthru", 00:03:46.153 "block_size": 512, 00:03:46.153 "num_blocks": 16384, 00:03:46.153 "uuid": "9fce4e92-af34-5618-9505-9fec3a438635", 00:03:46.153 "assigned_rate_limits": { 00:03:46.153 "rw_ios_per_sec": 0, 00:03:46.153 "rw_mbytes_per_sec": 0, 00:03:46.153 "r_mbytes_per_sec": 0, 00:03:46.154 "w_mbytes_per_sec": 0 00:03:46.154 }, 00:03:46.154 "claimed": false, 00:03:46.154 "zoned": false, 00:03:46.154 "supported_io_types": { 00:03:46.154 "read": true, 00:03:46.154 "write": true, 00:03:46.154 "unmap": true, 00:03:46.154 "flush": true, 00:03:46.154 "reset": true, 00:03:46.154 "nvme_admin": false, 00:03:46.154 "nvme_io": false, 00:03:46.154 "nvme_io_md": false, 00:03:46.154 "write_zeroes": true, 00:03:46.154 "zcopy": true, 00:03:46.154 "get_zone_info": false, 00:03:46.154 "zone_management": false, 00:03:46.154 "zone_append": false, 00:03:46.154 "compare": false, 00:03:46.154 "compare_and_write": false, 00:03:46.154 "abort": true, 00:03:46.154 "seek_hole": false, 00:03:46.154 "seek_data": false, 00:03:46.154 "copy": true, 00:03:46.154 "nvme_iov_md": false 00:03:46.154 }, 00:03:46.154 "memory_domains": [ 00:03:46.154 { 00:03:46.154 "dma_device_id": "system", 00:03:46.154 "dma_device_type": 1 00:03:46.154 }, 00:03:46.154 { 00:03:46.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.154 "dma_device_type": 2 00:03:46.154 } 00:03:46.154 ], 00:03:46.154 "driver_specific": { 00:03:46.154 "passthru": { 00:03:46.154 "name": "Passthru0", 00:03:46.154 "base_bdev_name": "Malloc0" 00:03:46.154 } 00:03:46.154 } 00:03:46.154 } 00:03:46.154 ]' 00:03:46.154 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.154 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.154 11:05:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.154 11:05:18 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.154 11:05:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.154 11:05:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.154 11:05:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.154 11:05:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.154 11:05:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.154 11:05:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.154 00:03:46.154 real 0m0.257s 00:03:46.154 user 0m0.164s 00:03:46.154 sys 0m0.034s 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.154 11:05:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.154 ************************************ 00:03:46.154 END TEST rpc_integrity 00:03:46.154 ************************************ 00:03:46.154 11:05:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.154 11:05:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.154 11:05:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.154 11:05:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 ************************************ 00:03:46.414 START TEST rpc_plugins 
00:03:46.414 ************************************ 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:46.414 { 00:03:46.414 "name": "Malloc1", 00:03:46.414 "aliases": [ 00:03:46.414 "a6b50228-5b6b-4358-b35d-8541196d5a6a" 00:03:46.414 ], 00:03:46.414 "product_name": "Malloc disk", 00:03:46.414 "block_size": 4096, 00:03:46.414 "num_blocks": 256, 00:03:46.414 "uuid": "a6b50228-5b6b-4358-b35d-8541196d5a6a", 00:03:46.414 "assigned_rate_limits": { 00:03:46.414 "rw_ios_per_sec": 0, 00:03:46.414 "rw_mbytes_per_sec": 0, 00:03:46.414 "r_mbytes_per_sec": 0, 00:03:46.414 "w_mbytes_per_sec": 0 00:03:46.414 }, 00:03:46.414 "claimed": false, 00:03:46.414 "zoned": false, 00:03:46.414 "supported_io_types": { 00:03:46.414 "read": true, 00:03:46.414 "write": true, 00:03:46.414 "unmap": true, 00:03:46.414 "flush": true, 00:03:46.414 "reset": true, 00:03:46.414 "nvme_admin": false, 00:03:46.414 "nvme_io": false, 00:03:46.414 "nvme_io_md": false, 00:03:46.414 "write_zeroes": true, 00:03:46.414 "zcopy": true, 00:03:46.414 "get_zone_info": false, 00:03:46.414 "zone_management": false, 00:03:46.414 
"zone_append": false, 00:03:46.414 "compare": false, 00:03:46.414 "compare_and_write": false, 00:03:46.414 "abort": true, 00:03:46.414 "seek_hole": false, 00:03:46.414 "seek_data": false, 00:03:46.414 "copy": true, 00:03:46.414 "nvme_iov_md": false 00:03:46.414 }, 00:03:46.414 "memory_domains": [ 00:03:46.414 { 00:03:46.414 "dma_device_id": "system", 00:03:46.414 "dma_device_type": 1 00:03:46.414 }, 00:03:46.414 { 00:03:46.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.414 "dma_device_type": 2 00:03:46.414 } 00:03:46.414 ], 00:03:46.414 "driver_specific": {} 00:03:46.414 } 00:03:46.414 ]' 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:46.414 11:05:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.414 00:03:46.414 real 0m0.137s 00:03:46.414 user 0m0.077s 00:03:46.414 sys 0m0.021s 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 ************************************ 
00:03:46.414 END TEST rpc_plugins 00:03:46.414 ************************************ 00:03:46.414 11:05:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.414 11:05:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.414 11:05:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.414 11:05:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 ************************************ 00:03:46.414 START TEST rpc_trace_cmd_test 00:03:46.414 ************************************ 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:46.414 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1508266", 00:03:46.414 "tpoint_group_mask": "0x8", 00:03:46.414 "iscsi_conn": { 00:03:46.414 "mask": "0x2", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "scsi": { 00:03:46.414 "mask": "0x4", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "bdev": { 00:03:46.414 "mask": "0x8", 00:03:46.414 "tpoint_mask": "0xffffffffffffffff" 00:03:46.414 }, 00:03:46.414 "nvmf_rdma": { 00:03:46.414 "mask": "0x10", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "nvmf_tcp": { 00:03:46.414 "mask": "0x20", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "ftl": { 00:03:46.414 "mask": "0x40", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "blobfs": { 00:03:46.414 "mask": "0x80", 00:03:46.414 
"tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "dsa": { 00:03:46.414 "mask": "0x200", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "thread": { 00:03:46.414 "mask": "0x400", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "nvme_pcie": { 00:03:46.414 "mask": "0x800", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "iaa": { 00:03:46.414 "mask": "0x1000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "nvme_tcp": { 00:03:46.414 "mask": "0x2000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "bdev_nvme": { 00:03:46.414 "mask": "0x4000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "sock": { 00:03:46.414 "mask": "0x8000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "blob": { 00:03:46.414 "mask": "0x10000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "bdev_raid": { 00:03:46.414 "mask": "0x20000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 }, 00:03:46.414 "scheduler": { 00:03:46.414 "mask": "0x40000", 00:03:46.414 "tpoint_mask": "0x0" 00:03:46.414 } 00:03:46.414 }' 00:03:46.414 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:46.674 00:03:46.674 real 0m0.207s 00:03:46.674 user 0m0.176s 00:03:46.674 sys 0m0.024s 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.674 11:05:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.674 ************************************ 00:03:46.674 END TEST rpc_trace_cmd_test 00:03:46.674 ************************************ 00:03:46.674 11:05:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:46.674 11:05:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:46.674 11:05:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:46.674 11:05:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.674 11:05:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.674 11:05:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.674 ************************************ 00:03:46.674 START TEST rpc_daemon_integrity 00:03:46.674 ************************************ 00:03:46.674 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:46.674 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.674 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.674 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.674 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.933 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.933 { 00:03:46.933 "name": "Malloc2", 00:03:46.933 "aliases": [ 00:03:46.933 "e2653a1a-e398-4230-b934-faa3b7277694" 00:03:46.933 ], 00:03:46.933 "product_name": "Malloc disk", 00:03:46.933 "block_size": 512, 00:03:46.933 "num_blocks": 16384, 00:03:46.933 "uuid": "e2653a1a-e398-4230-b934-faa3b7277694", 00:03:46.933 "assigned_rate_limits": { 00:03:46.933 "rw_ios_per_sec": 0, 00:03:46.934 "rw_mbytes_per_sec": 0, 00:03:46.934 "r_mbytes_per_sec": 0, 00:03:46.934 "w_mbytes_per_sec": 0 00:03:46.934 }, 00:03:46.934 "claimed": false, 00:03:46.934 "zoned": false, 00:03:46.934 "supported_io_types": { 00:03:46.934 "read": true, 00:03:46.934 "write": true, 00:03:46.934 "unmap": true, 00:03:46.934 "flush": true, 00:03:46.934 "reset": true, 00:03:46.934 "nvme_admin": false, 00:03:46.934 "nvme_io": false, 00:03:46.934 "nvme_io_md": false, 00:03:46.934 "write_zeroes": true, 00:03:46.934 "zcopy": true, 00:03:46.934 "get_zone_info": false, 00:03:46.934 "zone_management": false, 00:03:46.934 "zone_append": false, 00:03:46.934 "compare": false, 00:03:46.934 "compare_and_write": false, 00:03:46.934 "abort": true, 00:03:46.934 "seek_hole": false, 00:03:46.934 "seek_data": false, 00:03:46.934 "copy": true, 00:03:46.934 "nvme_iov_md": false 00:03:46.934 }, 00:03:46.934 "memory_domains": [ 00:03:46.934 { 
00:03:46.934 "dma_device_id": "system", 00:03:46.934 "dma_device_type": 1 00:03:46.934 }, 00:03:46.934 { 00:03:46.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.934 "dma_device_type": 2 00:03:46.934 } 00:03:46.934 ], 00:03:46.934 "driver_specific": {} 00:03:46.934 } 00:03:46.934 ]' 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 [2024-12-06 11:05:19.735196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:46.934 [2024-12-06 11:05:19.735222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.934 [2024-12-06 11:05:19.735233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x895ea0 00:03:46.934 [2024-12-06 11:05:19.735238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.934 [2024-12-06 11:05:19.736148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.934 [2024-12-06 11:05:19.736168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.934 Passthru0 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.934 { 00:03:46.934 "name": "Malloc2", 00:03:46.934 "aliases": [ 00:03:46.934 "e2653a1a-e398-4230-b934-faa3b7277694" 00:03:46.934 ], 00:03:46.934 "product_name": "Malloc disk", 00:03:46.934 "block_size": 512, 00:03:46.934 "num_blocks": 16384, 00:03:46.934 "uuid": "e2653a1a-e398-4230-b934-faa3b7277694", 00:03:46.934 "assigned_rate_limits": { 00:03:46.934 "rw_ios_per_sec": 0, 00:03:46.934 "rw_mbytes_per_sec": 0, 00:03:46.934 "r_mbytes_per_sec": 0, 00:03:46.934 "w_mbytes_per_sec": 0 00:03:46.934 }, 00:03:46.934 "claimed": true, 00:03:46.934 "claim_type": "exclusive_write", 00:03:46.934 "zoned": false, 00:03:46.934 "supported_io_types": { 00:03:46.934 "read": true, 00:03:46.934 "write": true, 00:03:46.934 "unmap": true, 00:03:46.934 "flush": true, 00:03:46.934 "reset": true, 00:03:46.934 "nvme_admin": false, 00:03:46.934 "nvme_io": false, 00:03:46.934 "nvme_io_md": false, 00:03:46.934 "write_zeroes": true, 00:03:46.934 "zcopy": true, 00:03:46.934 "get_zone_info": false, 00:03:46.934 "zone_management": false, 00:03:46.934 "zone_append": false, 00:03:46.934 "compare": false, 00:03:46.934 "compare_and_write": false, 00:03:46.934 "abort": true, 00:03:46.934 "seek_hole": false, 00:03:46.934 "seek_data": false, 00:03:46.934 "copy": true, 00:03:46.934 "nvme_iov_md": false 00:03:46.934 }, 00:03:46.934 "memory_domains": [ 00:03:46.934 { 00:03:46.934 "dma_device_id": "system", 00:03:46.934 "dma_device_type": 1 00:03:46.934 }, 00:03:46.934 { 00:03:46.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.934 "dma_device_type": 2 00:03:46.934 } 00:03:46.934 ], 00:03:46.934 "driver_specific": {} 00:03:46.934 }, 00:03:46.934 { 00:03:46.934 "name": "Passthru0", 00:03:46.934 "aliases": [ 00:03:46.934 "49505828-5f76-5c5e-91bd-028130e3b9e2" 00:03:46.934 ], 00:03:46.934 "product_name": "passthru", 00:03:46.934 "block_size": 512, 00:03:46.934 "num_blocks": 16384, 00:03:46.934 "uuid": 
"49505828-5f76-5c5e-91bd-028130e3b9e2", 00:03:46.934 "assigned_rate_limits": { 00:03:46.934 "rw_ios_per_sec": 0, 00:03:46.934 "rw_mbytes_per_sec": 0, 00:03:46.934 "r_mbytes_per_sec": 0, 00:03:46.934 "w_mbytes_per_sec": 0 00:03:46.934 }, 00:03:46.934 "claimed": false, 00:03:46.934 "zoned": false, 00:03:46.934 "supported_io_types": { 00:03:46.934 "read": true, 00:03:46.934 "write": true, 00:03:46.934 "unmap": true, 00:03:46.934 "flush": true, 00:03:46.934 "reset": true, 00:03:46.934 "nvme_admin": false, 00:03:46.934 "nvme_io": false, 00:03:46.934 "nvme_io_md": false, 00:03:46.934 "write_zeroes": true, 00:03:46.934 "zcopy": true, 00:03:46.934 "get_zone_info": false, 00:03:46.934 "zone_management": false, 00:03:46.934 "zone_append": false, 00:03:46.934 "compare": false, 00:03:46.934 "compare_and_write": false, 00:03:46.934 "abort": true, 00:03:46.934 "seek_hole": false, 00:03:46.934 "seek_data": false, 00:03:46.934 "copy": true, 00:03:46.934 "nvme_iov_md": false 00:03:46.934 }, 00:03:46.934 "memory_domains": [ 00:03:46.934 { 00:03:46.934 "dma_device_id": "system", 00:03:46.934 "dma_device_type": 1 00:03:46.934 }, 00:03:46.934 { 00:03:46.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.934 "dma_device_type": 2 00:03:46.934 } 00:03:46.934 ], 00:03:46.934 "driver_specific": { 00:03:46.934 "passthru": { 00:03:46.934 "name": "Passthru0", 00:03:46.934 "base_bdev_name": "Malloc2" 00:03:46.934 } 00:03:46.934 } 00:03:46.934 } 00:03:46.934 ]' 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.934 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.193 11:05:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.193 00:03:47.193 real 0m0.276s 00:03:47.193 user 0m0.173s 00:03:47.193 sys 0m0.039s 00:03:47.193 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.194 11:05:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.194 ************************************ 00:03:47.194 END TEST rpc_daemon_integrity 00:03:47.194 ************************************ 00:03:47.194 11:05:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.194 11:05:19 rpc -- rpc/rpc.sh@84 -- # killprocess 1508266 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@954 -- # '[' -z 1508266 ']' 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@958 -- # kill -0 1508266 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@959 -- # uname 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.194 11:05:19 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508266 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508266' 00:03:47.194 killing process with pid 1508266 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@973 -- # kill 1508266 00:03:47.194 11:05:19 rpc -- common/autotest_common.sh@978 -- # wait 1508266 00:03:47.453 00:03:47.453 real 0m2.519s 00:03:47.453 user 0m3.167s 00:03:47.453 sys 0m0.726s 00:03:47.453 11:05:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.453 11:05:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.453 ************************************ 00:03:47.453 END TEST rpc 00:03:47.453 ************************************ 00:03:47.453 11:05:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.453 11:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.453 11:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.453 11:05:20 -- common/autotest_common.sh@10 -- # set +x 00:03:47.453 ************************************ 00:03:47.453 START TEST skip_rpc 00:03:47.453 ************************************ 00:03:47.453 11:05:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.713 * Looking for test storage... 
00:03:47.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.713 11:05:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:47.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.713 --rc genhtml_branch_coverage=1 00:03:47.713 --rc genhtml_function_coverage=1 00:03:47.713 --rc genhtml_legend=1 00:03:47.713 --rc geninfo_all_blocks=1 00:03:47.713 --rc geninfo_unexecuted_blocks=1 00:03:47.713 00:03:47.713 ' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:47.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.713 --rc genhtml_branch_coverage=1 00:03:47.713 --rc genhtml_function_coverage=1 00:03:47.713 --rc genhtml_legend=1 00:03:47.713 --rc geninfo_all_blocks=1 00:03:47.713 --rc geninfo_unexecuted_blocks=1 00:03:47.713 00:03:47.713 ' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:47.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.713 --rc genhtml_branch_coverage=1 00:03:47.713 --rc genhtml_function_coverage=1 00:03:47.713 --rc genhtml_legend=1 00:03:47.713 --rc geninfo_all_blocks=1 00:03:47.713 --rc geninfo_unexecuted_blocks=1 00:03:47.713 00:03:47.713 ' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:47.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.713 --rc genhtml_branch_coverage=1 00:03:47.713 --rc genhtml_function_coverage=1 00:03:47.713 --rc genhtml_legend=1 00:03:47.713 --rc geninfo_all_blocks=1 00:03:47.713 --rc geninfo_unexecuted_blocks=1 00:03:47.713 00:03:47.713 ' 00:03:47.713 11:05:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.713 11:05:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.713 11:05:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.713 11:05:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.713 ************************************ 00:03:47.713 START TEST skip_rpc 00:03:47.713 ************************************ 00:03:47.713 11:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:47.713 11:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1508898 00:03:47.713 11:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:47.713 11:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.713 11:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:47.713 [2024-12-06 11:05:20.599560] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:03:47.713 [2024-12-06 11:05:20.599597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508898 ] 00:03:47.972 [2024-12-06 11:05:20.669274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.972 [2024-12-06 11:05:20.706804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:53.248 11:05:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1508898 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1508898 ']' 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1508898 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508898 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508898' 00:03:53.248 killing process with pid 1508898 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1508898 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1508898 00:03:53.248 00:03:53.248 real 0m5.364s 00:03:53.248 user 0m5.117s 00:03:53.248 sys 0m0.283s 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.248 11:05:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.248 ************************************ 00:03:53.248 END TEST skip_rpc 00:03:53.248 ************************************ 00:03:53.248 11:05:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:53.248 11:05:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.248 11:05:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.248 11:05:25 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.248 ************************************ 00:03:53.248 START TEST skip_rpc_with_json 00:03:53.248 ************************************ 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1509802 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1509802 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1509802 ']' 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.248 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.248 [2024-12-06 11:05:26.037226] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:03:53.248 [2024-12-06 11:05:26.037269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509802 ] 00:03:53.248 [2024-12-06 11:05:26.110271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.248 [2024-12-06 11:05:26.149518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.185 [2024-12-06 11:05:26.845240] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:54.185 request: 00:03:54.185 { 00:03:54.185 "trtype": "tcp", 00:03:54.185 "method": "nvmf_get_transports", 00:03:54.185 "req_id": 1 00:03:54.185 } 00:03:54.185 Got JSON-RPC error response 00:03:54.185 response: 00:03:54.185 { 00:03:54.185 "code": -19, 00:03:54.185 "message": "No such device" 00:03:54.185 } 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.185 [2024-12-06 11:05:26.857337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.185 11:05:26 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.185 11:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.185 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.185 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.185 { 00:03:54.185 "subsystems": [ 00:03:54.185 { 00:03:54.185 "subsystem": "fsdev", 00:03:54.185 "config": [ 00:03:54.185 { 00:03:54.185 "method": "fsdev_set_opts", 00:03:54.185 "params": { 00:03:54.185 "fsdev_io_pool_size": 65535, 00:03:54.185 "fsdev_io_cache_size": 256 00:03:54.185 } 00:03:54.185 } 00:03:54.185 ] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "vfio_user_target", 00:03:54.185 "config": null 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "keyring", 00:03:54.185 "config": [] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "iobuf", 00:03:54.185 "config": [ 00:03:54.185 { 00:03:54.185 "method": "iobuf_set_options", 00:03:54.185 "params": { 00:03:54.185 "small_pool_count": 8192, 00:03:54.185 "large_pool_count": 1024, 00:03:54.185 "small_bufsize": 8192, 00:03:54.185 "large_bufsize": 135168, 00:03:54.185 "enable_numa": false 00:03:54.185 } 00:03:54.185 } 00:03:54.185 ] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "sock", 00:03:54.185 "config": [ 00:03:54.185 { 00:03:54.185 "method": "sock_set_default_impl", 00:03:54.185 "params": { 00:03:54.185 "impl_name": "posix" 00:03:54.185 } 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "method": "sock_impl_set_options", 00:03:54.185 "params": { 00:03:54.185 "impl_name": "ssl", 00:03:54.185 "recv_buf_size": 4096, 00:03:54.185 "send_buf_size": 4096, 
00:03:54.185 "enable_recv_pipe": true, 00:03:54.185 "enable_quickack": false, 00:03:54.185 "enable_placement_id": 0, 00:03:54.185 "enable_zerocopy_send_server": true, 00:03:54.185 "enable_zerocopy_send_client": false, 00:03:54.185 "zerocopy_threshold": 0, 00:03:54.185 "tls_version": 0, 00:03:54.185 "enable_ktls": false 00:03:54.185 } 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "method": "sock_impl_set_options", 00:03:54.185 "params": { 00:03:54.185 "impl_name": "posix", 00:03:54.185 "recv_buf_size": 2097152, 00:03:54.185 "send_buf_size": 2097152, 00:03:54.185 "enable_recv_pipe": true, 00:03:54.185 "enable_quickack": false, 00:03:54.185 "enable_placement_id": 0, 00:03:54.185 "enable_zerocopy_send_server": true, 00:03:54.185 "enable_zerocopy_send_client": false, 00:03:54.185 "zerocopy_threshold": 0, 00:03:54.185 "tls_version": 0, 00:03:54.185 "enable_ktls": false 00:03:54.185 } 00:03:54.185 } 00:03:54.185 ] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "vmd", 00:03:54.185 "config": [] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "accel", 00:03:54.185 "config": [ 00:03:54.185 { 00:03:54.185 "method": "accel_set_options", 00:03:54.185 "params": { 00:03:54.185 "small_cache_size": 128, 00:03:54.185 "large_cache_size": 16, 00:03:54.185 "task_count": 2048, 00:03:54.185 "sequence_count": 2048, 00:03:54.185 "buf_count": 2048 00:03:54.185 } 00:03:54.185 } 00:03:54.185 ] 00:03:54.185 }, 00:03:54.185 { 00:03:54.185 "subsystem": "bdev", 00:03:54.185 "config": [ 00:03:54.185 { 00:03:54.185 "method": "bdev_set_options", 00:03:54.185 "params": { 00:03:54.186 "bdev_io_pool_size": 65535, 00:03:54.186 "bdev_io_cache_size": 256, 00:03:54.186 "bdev_auto_examine": true, 00:03:54.186 "iobuf_small_cache_size": 128, 00:03:54.186 "iobuf_large_cache_size": 16 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "bdev_raid_set_options", 00:03:54.186 "params": { 00:03:54.186 "process_window_size_kb": 1024, 00:03:54.186 "process_max_bandwidth_mb_sec": 0 
00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "bdev_iscsi_set_options", 00:03:54.186 "params": { 00:03:54.186 "timeout_sec": 30 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "bdev_nvme_set_options", 00:03:54.186 "params": { 00:03:54.186 "action_on_timeout": "none", 00:03:54.186 "timeout_us": 0, 00:03:54.186 "timeout_admin_us": 0, 00:03:54.186 "keep_alive_timeout_ms": 10000, 00:03:54.186 "arbitration_burst": 0, 00:03:54.186 "low_priority_weight": 0, 00:03:54.186 "medium_priority_weight": 0, 00:03:54.186 "high_priority_weight": 0, 00:03:54.186 "nvme_adminq_poll_period_us": 10000, 00:03:54.186 "nvme_ioq_poll_period_us": 0, 00:03:54.186 "io_queue_requests": 0, 00:03:54.186 "delay_cmd_submit": true, 00:03:54.186 "transport_retry_count": 4, 00:03:54.186 "bdev_retry_count": 3, 00:03:54.186 "transport_ack_timeout": 0, 00:03:54.186 "ctrlr_loss_timeout_sec": 0, 00:03:54.186 "reconnect_delay_sec": 0, 00:03:54.186 "fast_io_fail_timeout_sec": 0, 00:03:54.186 "disable_auto_failback": false, 00:03:54.186 "generate_uuids": false, 00:03:54.186 "transport_tos": 0, 00:03:54.186 "nvme_error_stat": false, 00:03:54.186 "rdma_srq_size": 0, 00:03:54.186 "io_path_stat": false, 00:03:54.186 "allow_accel_sequence": false, 00:03:54.186 "rdma_max_cq_size": 0, 00:03:54.186 "rdma_cm_event_timeout_ms": 0, 00:03:54.186 "dhchap_digests": [ 00:03:54.186 "sha256", 00:03:54.186 "sha384", 00:03:54.186 "sha512" 00:03:54.186 ], 00:03:54.186 "dhchap_dhgroups": [ 00:03:54.186 "null", 00:03:54.186 "ffdhe2048", 00:03:54.186 "ffdhe3072", 00:03:54.186 "ffdhe4096", 00:03:54.186 "ffdhe6144", 00:03:54.186 "ffdhe8192" 00:03:54.186 ] 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "bdev_nvme_set_hotplug", 00:03:54.186 "params": { 00:03:54.186 "period_us": 100000, 00:03:54.186 "enable": false 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "bdev_wait_for_examine" 00:03:54.186 } 00:03:54.186 ] 00:03:54.186 }, 00:03:54.186 { 
00:03:54.186 "subsystem": "scsi", 00:03:54.186 "config": null 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "scheduler", 00:03:54.186 "config": [ 00:03:54.186 { 00:03:54.186 "method": "framework_set_scheduler", 00:03:54.186 "params": { 00:03:54.186 "name": "static" 00:03:54.186 } 00:03:54.186 } 00:03:54.186 ] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "vhost_scsi", 00:03:54.186 "config": [] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "vhost_blk", 00:03:54.186 "config": [] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "ublk", 00:03:54.186 "config": [] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "nbd", 00:03:54.186 "config": [] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "nvmf", 00:03:54.186 "config": [ 00:03:54.186 { 00:03:54.186 "method": "nvmf_set_config", 00:03:54.186 "params": { 00:03:54.186 "discovery_filter": "match_any", 00:03:54.186 "admin_cmd_passthru": { 00:03:54.186 "identify_ctrlr": false 00:03:54.186 }, 00:03:54.186 "dhchap_digests": [ 00:03:54.186 "sha256", 00:03:54.186 "sha384", 00:03:54.186 "sha512" 00:03:54.186 ], 00:03:54.186 "dhchap_dhgroups": [ 00:03:54.186 "null", 00:03:54.186 "ffdhe2048", 00:03:54.186 "ffdhe3072", 00:03:54.186 "ffdhe4096", 00:03:54.186 "ffdhe6144", 00:03:54.186 "ffdhe8192" 00:03:54.186 ] 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "nvmf_set_max_subsystems", 00:03:54.186 "params": { 00:03:54.186 "max_subsystems": 1024 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "nvmf_set_crdt", 00:03:54.186 "params": { 00:03:54.186 "crdt1": 0, 00:03:54.186 "crdt2": 0, 00:03:54.186 "crdt3": 0 00:03:54.186 } 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "method": "nvmf_create_transport", 00:03:54.186 "params": { 00:03:54.186 "trtype": "TCP", 00:03:54.186 "max_queue_depth": 128, 00:03:54.186 "max_io_qpairs_per_ctrlr": 127, 00:03:54.186 "in_capsule_data_size": 4096, 00:03:54.186 "max_io_size": 131072, 00:03:54.186 
"io_unit_size": 131072, 00:03:54.186 "max_aq_depth": 128, 00:03:54.186 "num_shared_buffers": 511, 00:03:54.186 "buf_cache_size": 4294967295, 00:03:54.186 "dif_insert_or_strip": false, 00:03:54.186 "zcopy": false, 00:03:54.186 "c2h_success": true, 00:03:54.186 "sock_priority": 0, 00:03:54.186 "abort_timeout_sec": 1, 00:03:54.186 "ack_timeout": 0, 00:03:54.186 "data_wr_pool_size": 0 00:03:54.186 } 00:03:54.186 } 00:03:54.186 ] 00:03:54.186 }, 00:03:54.186 { 00:03:54.186 "subsystem": "iscsi", 00:03:54.186 "config": [ 00:03:54.186 { 00:03:54.186 "method": "iscsi_set_options", 00:03:54.186 "params": { 00:03:54.186 "node_base": "iqn.2016-06.io.spdk", 00:03:54.186 "max_sessions": 128, 00:03:54.186 "max_connections_per_session": 2, 00:03:54.186 "max_queue_depth": 64, 00:03:54.186 "default_time2wait": 2, 00:03:54.186 "default_time2retain": 20, 00:03:54.186 "first_burst_length": 8192, 00:03:54.186 "immediate_data": true, 00:03:54.186 "allow_duplicated_isid": false, 00:03:54.186 "error_recovery_level": 0, 00:03:54.186 "nop_timeout": 60, 00:03:54.186 "nop_in_interval": 30, 00:03:54.186 "disable_chap": false, 00:03:54.186 "require_chap": false, 00:03:54.186 "mutual_chap": false, 00:03:54.186 "chap_group": 0, 00:03:54.186 "max_large_datain_per_connection": 64, 00:03:54.186 "max_r2t_per_connection": 4, 00:03:54.186 "pdu_pool_size": 36864, 00:03:54.186 "immediate_data_pool_size": 16384, 00:03:54.186 "data_out_pool_size": 2048 00:03:54.186 } 00:03:54.186 } 00:03:54.186 ] 00:03:54.186 } 00:03:54.186 ] 00:03:54.186 } 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1509802 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1509802 ']' 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1509802 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509802 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509802' 00:03:54.186 killing process with pid 1509802 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1509802 00:03:54.186 11:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1509802 00:03:54.445 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1510076 00:03:54.445 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.445 11:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1510076 ']' 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1510076' 00:03:59.813 killing process with pid 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1510076 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:59.813 00:03:59.813 real 0m6.752s 00:03:59.813 user 0m6.560s 00:03:59.813 sys 0m0.635s 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.813 11:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.813 ************************************ 00:03:59.813 END TEST skip_rpc_with_json 00:03:59.813 ************************************ 00:04:00.073 11:05:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.073 ************************************ 00:04:00.073 START TEST skip_rpc_with_delay 00:04:00.073 ************************************ 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.073 [2024-12-06 11:05:32.865614] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:00.073 00:04:00.073 real 0m0.069s 00:04:00.073 user 0m0.043s 00:04:00.073 sys 0m0.025s 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.073 11:05:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:00.073 ************************************ 00:04:00.073 END TEST skip_rpc_with_delay 00:04:00.073 ************************************ 00:04:00.073 11:05:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:00.073 11:05:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:00.073 11:05:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.073 11:05:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.073 ************************************ 00:04:00.073 START TEST exit_on_failed_rpc_init 00:04:00.073 ************************************ 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1511181 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1511181 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1511181 ']' 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.073 11:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.073 [2024-12-06 11:05:33.004441] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:00.073 [2024-12-06 11:05:33.004483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511181 ] 00:04:00.332 [2024-12-06 11:05:33.078259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.332 [2024-12-06 11:05:33.117353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:00.901 
11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:00.901 11:05:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.161 [2024-12-06 11:05:33.860225] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:01.161 [2024-12-06 11:05:33.860269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511446 ] 00:04:01.161 [2024-12-06 11:05:33.929974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.161 [2024-12-06 11:05:33.967323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.161 [2024-12-06 11:05:33.967374] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:01.161 [2024-12-06 11:05:33.967382] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:01.161 [2024-12-06 11:05:33.967387] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1511181 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1511181 ']' 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1511181 00:04:01.161 11:05:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1511181 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1511181' 00:04:01.161 killing process with pid 1511181 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1511181 00:04:01.161 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1511181 00:04:01.421 00:04:01.421 real 0m1.400s 00:04:01.421 user 0m1.573s 00:04:01.421 sys 0m0.411s 00:04:01.421 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.421 11:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.421 ************************************ 00:04:01.421 END TEST exit_on_failed_rpc_init 00:04:01.421 ************************************ 00:04:01.681 11:05:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.681 00:04:01.681 real 0m14.053s 00:04:01.681 user 0m13.513s 00:04:01.681 sys 0m1.636s 00:04:01.681 11:05:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.681 11:05:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.681 ************************************ 00:04:01.681 END TEST skip_rpc 00:04:01.681 ************************************ 00:04:01.681 11:05:34 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.681 11:05:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.681 11:05:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.681 11:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:01.681 ************************************ 00:04:01.681 START TEST rpc_client 00:04:01.681 ************************************ 00:04:01.681 11:05:34 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.681 * Looking for test storage... 00:04:01.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:01.681 11:05:34 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.681 11:05:34 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.681 11:05:34 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.681 11:05:34 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.681 11:05:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.941 11:05:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 
00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.941 11:05:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:01.941 OK 00:04:01.941 11:05:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:01.941 00:04:01.941 real 0m0.196s 00:04:01.941 user 0m0.115s 00:04:01.941 sys 0m0.093s 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.941 11:05:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:01.941 ************************************ 00:04:01.941 END TEST rpc_client 00:04:01.941 ************************************ 00:04:01.941 11:05:34 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:01.941 11:05:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.941 11:05:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.941 11:05:34 -- common/autotest_common.sh@10 
-- # set +x 00:04:01.941 ************************************ 00:04:01.941 START TEST json_config 00:04:01.941 ************************************ 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.941 11:05:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.941 11:05:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.941 11:05:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.941 11:05:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.941 11:05:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.941 11:05:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:01.941 11:05:34 json_config -- scripts/common.sh@345 -- # : 1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.941 11:05:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.941 11:05:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@353 -- # local d=1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.941 11:05:34 json_config -- scripts/common.sh@355 -- # echo 1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.941 11:05:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@353 -- # local d=2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.941 11:05:34 json_config -- scripts/common.sh@355 -- # echo 2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.941 11:05:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.941 11:05:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.941 11:05:34 json_config -- scripts/common.sh@368 -- # return 0 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.941 11:05:34 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.941 --rc genhtml_branch_coverage=1 00:04:01.941 --rc genhtml_function_coverage=1 00:04:01.941 --rc genhtml_legend=1 00:04:01.941 --rc geninfo_all_blocks=1 00:04:01.941 --rc geninfo_unexecuted_blocks=1 00:04:01.941 00:04:01.941 ' 00:04:01.942 11:05:34 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.942 --rc genhtml_branch_coverage=1 00:04:01.942 --rc genhtml_function_coverage=1 00:04:01.942 --rc genhtml_legend=1 00:04:01.942 --rc geninfo_all_blocks=1 00:04:01.942 --rc geninfo_unexecuted_blocks=1 00:04:01.942 00:04:01.942 ' 00:04:01.942 11:05:34 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.942 --rc genhtml_branch_coverage=1 00:04:01.942 --rc genhtml_function_coverage=1 00:04:01.942 --rc genhtml_legend=1 00:04:01.942 --rc geninfo_all_blocks=1 00:04:01.942 --rc geninfo_unexecuted_blocks=1 00:04:01.942 00:04:01.942 ' 00:04:01.942 11:05:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.942 11:05:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.201 11:05:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.201 11:05:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.201 11:05:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.201 11:05:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.201 11:05:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.201 11:05:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.201 11:05:34 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.201 11:05:34 json_config -- paths/export.sh@5 -- # export PATH 00:04:02.201 11:05:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@51 -- # : 0 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.201 11:05:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:02.201 11:05:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:02.202 INFO: JSON configuration test init 00:04:02.202 11:05:34 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.202 11:05:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:02.202 11:05:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:02.202 11:05:34 json_config -- json_config/common.sh@10 -- # shift 00:04:02.202 11:05:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.202 11:05:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.202 11:05:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.202 11:05:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.202 11:05:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.202 11:05:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1511721 00:04:02.202 11:05:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.202 Waiting for target to run... 
00:04:02.202 11:05:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1511721 /var/tmp/spdk_tgt.sock 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 1511721 ']' 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.202 11:05:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.202 11:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.202 [2024-12-06 11:05:34.970416] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:02.202 [2024-12-06 11:05:34.970461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511721 ] 00:04:02.461 [2024-12-06 11:05:35.397019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.719 [2024-12-06 11:05:35.455742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.976 11:05:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.976 11:05:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:02.976 11:05:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:02.976 00:04:02.976 11:05:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:02.976 11:05:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:02.976 11:05:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.977 11:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.977 11:05:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:02.977 11:05:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:02.977 11:05:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.977 11:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.977 11:05:35 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:02.977 11:05:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:02.977 11:05:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:06.260 11:05:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.260 11:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:06.260 11:05:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:06.260 11:05:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@54 -- # sort 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:06.260 11:05:39 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:06.260 11:05:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.260 11:05:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:06.260 11:05:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.260 11:05:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:06.260 11:05:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.260 11:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.519 MallocForNvmf0 00:04:06.519 11:05:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:06.519 11:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.778 MallocForNvmf1 00:04:06.778 11:05:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.778 11:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.778 [2024-12-06 11:05:39.676739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.778 11:05:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:06.778 11:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.037 11:05:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.037 11:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.296 11:05:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.296 11:05:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.296 11:05:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.296 11:05:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.554 [2024-12-06 11:05:40.346891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.554 11:05:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:07.554 11:05:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.554 11:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.554 11:05:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:07.554 11:05:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.554 11:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.554 11:05:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:07.554 11:05:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:07.554 11:05:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:07.814 MallocBdevForConfigChangeCheck 00:04:07.814 11:05:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:07.814 11:05:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.814 11:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.814 11:05:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:07.814 11:05:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.073 11:05:40 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:04:08.073 INFO: shutting down applications...
00:04:08.073 11:05:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:04:08.073 11:05:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:04:08.073 11:05:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:04:08.073 11:05:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:09.979 Calling clear_iscsi_subsystem
00:04:09.979 Calling clear_nvmf_subsystem
00:04:09.979 Calling clear_nbd_subsystem
00:04:09.979 Calling clear_ublk_subsystem
00:04:09.979 Calling clear_vhost_blk_subsystem
00:04:09.979 Calling clear_vhost_scsi_subsystem
00:04:09.979 Calling clear_bdev_subsystem
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@350 -- # count=100
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:09.979 11:05:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:10.239 11:05:42 json_config -- json_config/json_config.sh@352 -- # break
00:04:10.239 11:05:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:04:10.239 11:05:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:04:10.239 11:05:42 json_config -- json_config/common.sh@31 -- # local app=target
00:04:10.239 11:05:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:10.239 11:05:42 json_config -- json_config/common.sh@35 -- # [[ -n 1511721 ]]
00:04:10.239 11:05:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1511721
00:04:10.239 11:05:42 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:10.239 11:05:42 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:10.239 11:05:42 json_config -- json_config/common.sh@41 -- # kill -0 1511721
00:04:10.239 11:05:42 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:10.809 11:05:43 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:10.809 11:05:43 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:10.809 11:05:43 json_config -- json_config/common.sh@41 -- # kill -0 1511721
00:04:10.809 11:05:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:10.809 11:05:43 json_config -- json_config/common.sh@43 -- # break
00:04:10.809 11:05:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:10.809 11:05:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:10.809 SPDK target shutdown done
00:04:10.809 11:05:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:04:10.809 INFO: relaunching applications...
00:04:10.809 11:05:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:10.809 11:05:43 json_config -- json_config/common.sh@9 -- # local app=target
00:04:10.809 11:05:43 json_config -- json_config/common.sh@10 -- # shift
00:04:10.809 11:05:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:10.809 11:05:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:10.809 11:05:43 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:10.809 11:05:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:10.809 11:05:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:10.809 11:05:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1513352
00:04:10.809 11:05:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:10.809 Waiting for target to run...
00:04:10.809 11:05:43 json_config -- json_config/common.sh@25 -- # waitforlisten 1513352 /var/tmp/spdk_tgt.sock
00:04:10.809 11:05:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 1513352 ']'
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:10.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:10.809 11:05:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:10.809 [2024-12-06 11:05:43.525626] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization...
00:04:10.809 [2024-12-06 11:05:43.525685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513352 ]
00:04:11.068 [2024-12-06 11:05:43.819409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:11.068 [2024-12-06 11:05:43.851975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.359 [2024-12-06 11:05:46.890369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:14.359 [2024-12-06 11:05:46.922742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:14.359 11:05:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:14.359 11:05:46 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:14.359 11:05:46 json_config -- json_config/common.sh@26 -- # echo ''
00:04:14.359 
00:04:14.359 11:05:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:14.359 11:05:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:14.359 INFO: Checking if target configuration is the same...
00:04:14.359 11:05:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:14.359 11:05:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:14.359 11:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:14.359 + '[' 2 -ne 2 ']'
00:04:14.359 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:14.359 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:14.359 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:14.359 +++ basename /dev/fd/62
00:04:14.359 ++ mktemp /tmp/62.XXX
00:04:14.359 + tmp_file_1=/tmp/62.SuU
00:04:14.359 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:14.359 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:14.359 + tmp_file_2=/tmp/spdk_tgt_config.json.4mW
00:04:14.359 + ret=0
00:04:14.359 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:14.359 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:14.618 + diff -u /tmp/62.SuU /tmp/spdk_tgt_config.json.4mW
00:04:14.618 + echo 'INFO: JSON config files are the same'
00:04:14.618 INFO: JSON config files are the same
00:04:14.618 + rm /tmp/62.SuU /tmp/spdk_tgt_config.json.4mW
00:04:14.618 + exit 0
00:04:14.618 11:05:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:14.618 11:05:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:14.618 INFO: changing configuration and checking if this can be detected...
00:04:14.618 11:05:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:14.618 11:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:14.618 11:05:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:14.618 11:05:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:14.618 11:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:14.618 + '[' 2 -ne 2 ']'
00:04:14.618 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:14.618 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:14.618 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:14.618 +++ basename /dev/fd/62
00:04:14.618 ++ mktemp /tmp/62.XXX
00:04:14.618 + tmp_file_1=/tmp/62.hq3
00:04:14.618 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:14.618 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:14.618 + tmp_file_2=/tmp/spdk_tgt_config.json.7fm
00:04:14.618 + ret=0
00:04:14.618 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:15.186 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:15.186 + diff -u /tmp/62.hq3 /tmp/spdk_tgt_config.json.7fm
00:04:15.186 + ret=1
00:04:15.186 + echo '=== Start of file: /tmp/62.hq3 ==='
00:04:15.186 + cat /tmp/62.hq3
00:04:15.186 + echo '=== End of file: /tmp/62.hq3 ==='
00:04:15.186 + echo ''
00:04:15.186 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7fm ==='
00:04:15.186 + cat /tmp/spdk_tgt_config.json.7fm
00:04:15.186 + echo '=== End of file: /tmp/spdk_tgt_config.json.7fm ==='
00:04:15.186 + echo ''
00:04:15.186 + rm /tmp/62.hq3 /tmp/spdk_tgt_config.json.7fm
00:04:15.186 + exit 1
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:15.186 INFO: configuration change detected.
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 1513352 ]]
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.186 11:05:47 json_config -- json_config/json_config.sh@330 -- # killprocess 1513352
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@954 -- # '[' -z 1513352 ']'
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@958 -- # kill -0 1513352
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@959 -- # uname
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513352
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513352'
00:04:15.186 killing process with pid 1513352
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@973 -- # kill 1513352
00:04:15.186 11:05:47 json_config -- common/autotest_common.sh@978 -- # wait 1513352
00:04:17.092 11:05:49 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:17.092 11:05:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:17.092 11:05:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:17.092 11:05:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:17.092 11:05:49 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:17.092 11:05:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:17.092 INFO: Success
00:04:17.092 
00:04:17.092 real	0m14.854s
00:04:17.092 user	0m15.036s
00:04:17.092 sys	0m2.451s
00:04:17.092 11:05:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.092 11:05:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:17.092 ************************************
00:04:17.092 END TEST json_config
00:04:17.092 ************************************
00:04:17.092 11:05:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:17.092 11:05:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.092 11:05:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.092 11:05:49 -- common/autotest_common.sh@10 -- # set +x
00:04:17.092 ************************************
00:04:17.092 START TEST json_config_extra_key
00:04:17.092 ************************************
00:04:17.092 11:05:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:17.092 11:05:49 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:17.092 11:05:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:17.092 11:05:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:17.092 11:05:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:17.092 11:05:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.093 --rc genhtml_branch_coverage=1
00:04:17.093 --rc genhtml_function_coverage=1
00:04:17.093 --rc genhtml_legend=1
00:04:17.093 --rc geninfo_all_blocks=1
00:04:17.093 --rc geninfo_unexecuted_blocks=1
00:04:17.093 
00:04:17.093 '
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.093 --rc genhtml_branch_coverage=1
00:04:17.093 --rc genhtml_function_coverage=1
00:04:17.093 --rc genhtml_legend=1
00:04:17.093 --rc geninfo_all_blocks=1
00:04:17.093 --rc geninfo_unexecuted_blocks=1
00:04:17.093 
00:04:17.093 '
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.093 --rc genhtml_branch_coverage=1
00:04:17.093 --rc genhtml_function_coverage=1
00:04:17.093 --rc genhtml_legend=1
00:04:17.093 --rc geninfo_all_blocks=1
00:04:17.093 --rc geninfo_unexecuted_blocks=1
00:04:17.093 
00:04:17.093 '
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.093 --rc genhtml_branch_coverage=1
00:04:17.093 --rc genhtml_function_coverage=1
00:04:17.093 --rc genhtml_legend=1
00:04:17.093 --rc geninfo_all_blocks=1
00:04:17.093 --rc geninfo_unexecuted_blocks=1
00:04:17.093 
00:04:17.093 '
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:17.093 11:05:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:17.093 11:05:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:17.093 11:05:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:17.093 11:05:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:17.093 11:05:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:17.093 11:05:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:17.093 11:05:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:17.093 INFO: launching applications...
00:04:17.093 11:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1514724
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:17.093 Waiting for target to run...
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1514724 /var/tmp/spdk_tgt.sock
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1514724 ']'
00:04:17.093 11:05:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:17.093 11:05:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:17.093 [2024-12-06 11:05:49.879220] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization...
00:04:17.093 [2024-12-06 11:05:49.879265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514724 ]
00:04:17.662 [2024-12-06 11:05:50.313307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.662 [2024-12-06 11:05:50.371524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:17.921 11:05:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:17.921 11:05:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:17.921 
00:04:17.921 11:05:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:17.921 INFO: shutting down applications...
00:04:17.921 11:05:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1514724 ]]
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1514724
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1514724
00:04:17.921 11:05:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1514724
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:18.490 11:05:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:18.490 SPDK target shutdown done
00:04:18.490 11:05:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:18.490 Success
00:04:18.490 
00:04:18.490 real	0m1.550s
00:04:18.490 user	0m1.152s
00:04:18.490 sys	0m0.557s
00:04:18.490 11:05:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.490 11:05:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:18.490 ************************************
00:04:18.490 END TEST json_config_extra_key
00:04:18.490 ************************************
00:04:18.490 11:05:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:18.490 11:05:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.490 11:05:51 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.490 11:05:51 -- common/autotest_common.sh@10 -- # set +x
00:04:18.490 ************************************
00:04:18.490 START TEST alias_rpc
00:04:18.490 ************************************
00:04:18.490 11:05:51 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
* Looking for test storage...
00:04:18.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:18.490 11:05:51 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:18.490 11:05:51 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:18.490 11:05:51 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:18.490 11:05:51 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.490 11:05:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:18.749 11:05:51 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:18.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.749 --rc genhtml_branch_coverage=1
00:04:18.749 --rc genhtml_function_coverage=1
00:04:18.749 --rc genhtml_legend=1
00:04:18.749 --rc geninfo_all_blocks=1
00:04:18.749 --rc geninfo_unexecuted_blocks=1
00:04:18.749 
00:04:18.749 '
00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:18.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.749 --rc genhtml_branch_coverage=1
00:04:18.749 --rc genhtml_function_coverage=1
00:04:18.749 --rc genhtml_legend=1
00:04:18.749 --rc geninfo_all_blocks=1
00:04:18.749 --rc geninfo_unexecuted_blocks=1
00:04:18.749 
00:04:18.749 '
00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:04:18.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.749 --rc genhtml_branch_coverage=1 00:04:18.749 --rc genhtml_function_coverage=1 00:04:18.749 --rc genhtml_legend=1 00:04:18.749 --rc geninfo_all_blocks=1 00:04:18.749 --rc geninfo_unexecuted_blocks=1 00:04:18.749 00:04:18.749 ' 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.749 --rc genhtml_branch_coverage=1 00:04:18.749 --rc genhtml_function_coverage=1 00:04:18.749 --rc genhtml_legend=1 00:04:18.749 --rc geninfo_all_blocks=1 00:04:18.749 --rc geninfo_unexecuted_blocks=1 00:04:18.749 00:04:18.749 ' 00:04:18.749 11:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:18.749 11:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1515041 00:04:18.749 11:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1515041 00:04:18.749 11:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1515041 ']' 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.749 11:05:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.749 [2024-12-06 11:05:51.489684] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:18.749 [2024-12-06 11:05:51.489731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515041 ] 00:04:18.749 [2024-12-06 11:05:51.558490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.749 [2024-12-06 11:05:51.597809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.687 11:05:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:19.687 11:05:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1515041 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1515041 ']' 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1515041 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515041 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515041' 00:04:19.687 killing process with pid 1515041 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 1515041 00:04:19.687 11:05:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 1515041 00:04:19.946 00:04:19.946 real 0m1.581s 00:04:19.946 user 0m1.719s 00:04:19.946 sys 0m0.429s 00:04:19.946 11:05:52 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.946 11:05:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.946 ************************************ 00:04:19.946 END TEST alias_rpc 00:04:19.946 ************************************ 00:04:19.946 11:05:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:19.946 11:05:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:19.946 11:05:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.946 11:05:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.946 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:04:20.205 ************************************ 00:04:20.205 START TEST spdkcli_tcp 00:04:20.205 ************************************ 00:04:20.205 11:05:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:20.205 * Looking for test storage... 
00:04:20.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.205 11:05:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.205 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.205 --rc genhtml_branch_coverage=1 00:04:20.206 --rc genhtml_function_coverage=1 00:04:20.206 --rc genhtml_legend=1 00:04:20.206 --rc geninfo_all_blocks=1 00:04:20.206 --rc geninfo_unexecuted_blocks=1 00:04:20.206 00:04:20.206 ' 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.206 --rc genhtml_branch_coverage=1 00:04:20.206 --rc genhtml_function_coverage=1 00:04:20.206 --rc genhtml_legend=1 00:04:20.206 --rc geninfo_all_blocks=1 00:04:20.206 --rc geninfo_unexecuted_blocks=1 00:04:20.206 00:04:20.206 ' 00:04:20.206 11:05:53 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.206 --rc genhtml_branch_coverage=1 00:04:20.206 --rc genhtml_function_coverage=1 00:04:20.206 --rc genhtml_legend=1 00:04:20.206 --rc geninfo_all_blocks=1 00:04:20.206 --rc geninfo_unexecuted_blocks=1 00:04:20.206 00:04:20.206 ' 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.206 --rc genhtml_branch_coverage=1 00:04:20.206 --rc genhtml_function_coverage=1 00:04:20.206 --rc genhtml_legend=1 00:04:20.206 --rc geninfo_all_blocks=1 00:04:20.206 --rc geninfo_unexecuted_blocks=1 00:04:20.206 00:04:20.206 ' 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1515371 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1515371 00:04:20.206 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1515371 ']' 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.206 11:05:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.465 [2024-12-06 11:05:53.144414] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:20.465 [2024-12-06 11:05:53.144461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515371 ] 00:04:20.465 [2024-12-06 11:05:53.214681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.465 [2024-12-06 11:05:53.254651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.465 [2024-12-06 11:05:53.254653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.034 11:05:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.034 11:05:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:21.034 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1515636 00:04:21.034 11:05:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:21.034 11:05:53 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:21.293 [ 00:04:21.293 "bdev_malloc_delete", 00:04:21.293 "bdev_malloc_create", 00:04:21.293 "bdev_null_resize", 00:04:21.293 "bdev_null_delete", 00:04:21.293 "bdev_null_create", 00:04:21.293 "bdev_nvme_cuse_unregister", 00:04:21.293 "bdev_nvme_cuse_register", 00:04:21.293 "bdev_opal_new_user", 00:04:21.293 "bdev_opal_set_lock_state", 00:04:21.293 "bdev_opal_delete", 00:04:21.293 "bdev_opal_get_info", 00:04:21.293 "bdev_opal_create", 00:04:21.293 "bdev_nvme_opal_revert", 00:04:21.293 "bdev_nvme_opal_init", 00:04:21.293 "bdev_nvme_send_cmd", 00:04:21.293 "bdev_nvme_set_keys", 00:04:21.293 "bdev_nvme_get_path_iostat", 00:04:21.293 "bdev_nvme_get_mdns_discovery_info", 00:04:21.293 "bdev_nvme_stop_mdns_discovery", 00:04:21.293 "bdev_nvme_start_mdns_discovery", 00:04:21.293 "bdev_nvme_set_multipath_policy", 00:04:21.293 "bdev_nvme_set_preferred_path", 00:04:21.293 "bdev_nvme_get_io_paths", 00:04:21.293 "bdev_nvme_remove_error_injection", 00:04:21.293 "bdev_nvme_add_error_injection", 00:04:21.293 "bdev_nvme_get_discovery_info", 00:04:21.293 "bdev_nvme_stop_discovery", 00:04:21.293 "bdev_nvme_start_discovery", 00:04:21.293 "bdev_nvme_get_controller_health_info", 00:04:21.293 "bdev_nvme_disable_controller", 00:04:21.293 "bdev_nvme_enable_controller", 00:04:21.293 "bdev_nvme_reset_controller", 00:04:21.293 "bdev_nvme_get_transport_statistics", 00:04:21.293 "bdev_nvme_apply_firmware", 00:04:21.293 "bdev_nvme_detach_controller", 00:04:21.293 "bdev_nvme_get_controllers", 00:04:21.293 "bdev_nvme_attach_controller", 00:04:21.293 "bdev_nvme_set_hotplug", 00:04:21.293 "bdev_nvme_set_options", 00:04:21.293 "bdev_passthru_delete", 00:04:21.293 "bdev_passthru_create", 00:04:21.293 "bdev_lvol_set_parent_bdev", 00:04:21.293 "bdev_lvol_set_parent", 00:04:21.293 "bdev_lvol_check_shallow_copy", 00:04:21.293 "bdev_lvol_start_shallow_copy", 00:04:21.293 "bdev_lvol_grow_lvstore", 00:04:21.293 
"bdev_lvol_get_lvols", 00:04:21.293 "bdev_lvol_get_lvstores", 00:04:21.293 "bdev_lvol_delete", 00:04:21.293 "bdev_lvol_set_read_only", 00:04:21.293 "bdev_lvol_resize", 00:04:21.293 "bdev_lvol_decouple_parent", 00:04:21.293 "bdev_lvol_inflate", 00:04:21.293 "bdev_lvol_rename", 00:04:21.293 "bdev_lvol_clone_bdev", 00:04:21.293 "bdev_lvol_clone", 00:04:21.293 "bdev_lvol_snapshot", 00:04:21.293 "bdev_lvol_create", 00:04:21.293 "bdev_lvol_delete_lvstore", 00:04:21.293 "bdev_lvol_rename_lvstore", 00:04:21.293 "bdev_lvol_create_lvstore", 00:04:21.293 "bdev_raid_set_options", 00:04:21.293 "bdev_raid_remove_base_bdev", 00:04:21.293 "bdev_raid_add_base_bdev", 00:04:21.293 "bdev_raid_delete", 00:04:21.293 "bdev_raid_create", 00:04:21.293 "bdev_raid_get_bdevs", 00:04:21.293 "bdev_error_inject_error", 00:04:21.293 "bdev_error_delete", 00:04:21.293 "bdev_error_create", 00:04:21.293 "bdev_split_delete", 00:04:21.293 "bdev_split_create", 00:04:21.294 "bdev_delay_delete", 00:04:21.294 "bdev_delay_create", 00:04:21.294 "bdev_delay_update_latency", 00:04:21.294 "bdev_zone_block_delete", 00:04:21.294 "bdev_zone_block_create", 00:04:21.294 "blobfs_create", 00:04:21.294 "blobfs_detect", 00:04:21.294 "blobfs_set_cache_size", 00:04:21.294 "bdev_aio_delete", 00:04:21.294 "bdev_aio_rescan", 00:04:21.294 "bdev_aio_create", 00:04:21.294 "bdev_ftl_set_property", 00:04:21.294 "bdev_ftl_get_properties", 00:04:21.294 "bdev_ftl_get_stats", 00:04:21.294 "bdev_ftl_unmap", 00:04:21.294 "bdev_ftl_unload", 00:04:21.294 "bdev_ftl_delete", 00:04:21.294 "bdev_ftl_load", 00:04:21.294 "bdev_ftl_create", 00:04:21.294 "bdev_virtio_attach_controller", 00:04:21.294 "bdev_virtio_scsi_get_devices", 00:04:21.294 "bdev_virtio_detach_controller", 00:04:21.294 "bdev_virtio_blk_set_hotplug", 00:04:21.294 "bdev_iscsi_delete", 00:04:21.294 "bdev_iscsi_create", 00:04:21.294 "bdev_iscsi_set_options", 00:04:21.294 "accel_error_inject_error", 00:04:21.294 "ioat_scan_accel_module", 00:04:21.294 "dsa_scan_accel_module", 
00:04:21.294 "iaa_scan_accel_module", 00:04:21.294 "vfu_virtio_create_fs_endpoint", 00:04:21.294 "vfu_virtio_create_scsi_endpoint", 00:04:21.294 "vfu_virtio_scsi_remove_target", 00:04:21.294 "vfu_virtio_scsi_add_target", 00:04:21.294 "vfu_virtio_create_blk_endpoint", 00:04:21.294 "vfu_virtio_delete_endpoint", 00:04:21.294 "keyring_file_remove_key", 00:04:21.294 "keyring_file_add_key", 00:04:21.294 "keyring_linux_set_options", 00:04:21.294 "fsdev_aio_delete", 00:04:21.294 "fsdev_aio_create", 00:04:21.294 "iscsi_get_histogram", 00:04:21.294 "iscsi_enable_histogram", 00:04:21.294 "iscsi_set_options", 00:04:21.294 "iscsi_get_auth_groups", 00:04:21.294 "iscsi_auth_group_remove_secret", 00:04:21.294 "iscsi_auth_group_add_secret", 00:04:21.294 "iscsi_delete_auth_group", 00:04:21.294 "iscsi_create_auth_group", 00:04:21.294 "iscsi_set_discovery_auth", 00:04:21.294 "iscsi_get_options", 00:04:21.294 "iscsi_target_node_request_logout", 00:04:21.294 "iscsi_target_node_set_redirect", 00:04:21.294 "iscsi_target_node_set_auth", 00:04:21.294 "iscsi_target_node_add_lun", 00:04:21.294 "iscsi_get_stats", 00:04:21.294 "iscsi_get_connections", 00:04:21.294 "iscsi_portal_group_set_auth", 00:04:21.294 "iscsi_start_portal_group", 00:04:21.294 "iscsi_delete_portal_group", 00:04:21.294 "iscsi_create_portal_group", 00:04:21.294 "iscsi_get_portal_groups", 00:04:21.294 "iscsi_delete_target_node", 00:04:21.294 "iscsi_target_node_remove_pg_ig_maps", 00:04:21.294 "iscsi_target_node_add_pg_ig_maps", 00:04:21.294 "iscsi_create_target_node", 00:04:21.294 "iscsi_get_target_nodes", 00:04:21.294 "iscsi_delete_initiator_group", 00:04:21.294 "iscsi_initiator_group_remove_initiators", 00:04:21.294 "iscsi_initiator_group_add_initiators", 00:04:21.294 "iscsi_create_initiator_group", 00:04:21.294 "iscsi_get_initiator_groups", 00:04:21.294 "nvmf_set_crdt", 00:04:21.294 "nvmf_set_config", 00:04:21.294 "nvmf_set_max_subsystems", 00:04:21.294 "nvmf_stop_mdns_prr", 00:04:21.294 "nvmf_publish_mdns_prr", 
00:04:21.294 "nvmf_subsystem_get_listeners", 00:04:21.294 "nvmf_subsystem_get_qpairs", 00:04:21.294 "nvmf_subsystem_get_controllers", 00:04:21.294 "nvmf_get_stats", 00:04:21.294 "nvmf_get_transports", 00:04:21.294 "nvmf_create_transport", 00:04:21.294 "nvmf_get_targets", 00:04:21.294 "nvmf_delete_target", 00:04:21.294 "nvmf_create_target", 00:04:21.294 "nvmf_subsystem_allow_any_host", 00:04:21.294 "nvmf_subsystem_set_keys", 00:04:21.294 "nvmf_subsystem_remove_host", 00:04:21.294 "nvmf_subsystem_add_host", 00:04:21.294 "nvmf_ns_remove_host", 00:04:21.294 "nvmf_ns_add_host", 00:04:21.294 "nvmf_subsystem_remove_ns", 00:04:21.294 "nvmf_subsystem_set_ns_ana_group", 00:04:21.294 "nvmf_subsystem_add_ns", 00:04:21.294 "nvmf_subsystem_listener_set_ana_state", 00:04:21.294 "nvmf_discovery_get_referrals", 00:04:21.294 "nvmf_discovery_remove_referral", 00:04:21.294 "nvmf_discovery_add_referral", 00:04:21.294 "nvmf_subsystem_remove_listener", 00:04:21.294 "nvmf_subsystem_add_listener", 00:04:21.294 "nvmf_delete_subsystem", 00:04:21.294 "nvmf_create_subsystem", 00:04:21.294 "nvmf_get_subsystems", 00:04:21.294 "env_dpdk_get_mem_stats", 00:04:21.294 "nbd_get_disks", 00:04:21.294 "nbd_stop_disk", 00:04:21.294 "nbd_start_disk", 00:04:21.294 "ublk_recover_disk", 00:04:21.294 "ublk_get_disks", 00:04:21.294 "ublk_stop_disk", 00:04:21.294 "ublk_start_disk", 00:04:21.294 "ublk_destroy_target", 00:04:21.294 "ublk_create_target", 00:04:21.294 "virtio_blk_create_transport", 00:04:21.294 "virtio_blk_get_transports", 00:04:21.294 "vhost_controller_set_coalescing", 00:04:21.294 "vhost_get_controllers", 00:04:21.294 "vhost_delete_controller", 00:04:21.294 "vhost_create_blk_controller", 00:04:21.294 "vhost_scsi_controller_remove_target", 00:04:21.294 "vhost_scsi_controller_add_target", 00:04:21.294 "vhost_start_scsi_controller", 00:04:21.294 "vhost_create_scsi_controller", 00:04:21.294 "thread_set_cpumask", 00:04:21.294 "scheduler_set_options", 00:04:21.294 "framework_get_governor", 00:04:21.294 
"framework_get_scheduler", 00:04:21.294 "framework_set_scheduler", 00:04:21.294 "framework_get_reactors", 00:04:21.294 "thread_get_io_channels", 00:04:21.294 "thread_get_pollers", 00:04:21.294 "thread_get_stats", 00:04:21.294 "framework_monitor_context_switch", 00:04:21.294 "spdk_kill_instance", 00:04:21.294 "log_enable_timestamps", 00:04:21.294 "log_get_flags", 00:04:21.294 "log_clear_flag", 00:04:21.294 "log_set_flag", 00:04:21.294 "log_get_level", 00:04:21.294 "log_set_level", 00:04:21.294 "log_get_print_level", 00:04:21.294 "log_set_print_level", 00:04:21.294 "framework_enable_cpumask_locks", 00:04:21.294 "framework_disable_cpumask_locks", 00:04:21.294 "framework_wait_init", 00:04:21.294 "framework_start_init", 00:04:21.294 "scsi_get_devices", 00:04:21.294 "bdev_get_histogram", 00:04:21.294 "bdev_enable_histogram", 00:04:21.294 "bdev_set_qos_limit", 00:04:21.294 "bdev_set_qd_sampling_period", 00:04:21.294 "bdev_get_bdevs", 00:04:21.294 "bdev_reset_iostat", 00:04:21.294 "bdev_get_iostat", 00:04:21.294 "bdev_examine", 00:04:21.294 "bdev_wait_for_examine", 00:04:21.294 "bdev_set_options", 00:04:21.294 "accel_get_stats", 00:04:21.294 "accel_set_options", 00:04:21.294 "accel_set_driver", 00:04:21.294 "accel_crypto_key_destroy", 00:04:21.294 "accel_crypto_keys_get", 00:04:21.294 "accel_crypto_key_create", 00:04:21.294 "accel_assign_opc", 00:04:21.295 "accel_get_module_info", 00:04:21.295 "accel_get_opc_assignments", 00:04:21.295 "vmd_rescan", 00:04:21.295 "vmd_remove_device", 00:04:21.295 "vmd_enable", 00:04:21.295 "sock_get_default_impl", 00:04:21.295 "sock_set_default_impl", 00:04:21.295 "sock_impl_set_options", 00:04:21.295 "sock_impl_get_options", 00:04:21.295 "iobuf_get_stats", 00:04:21.295 "iobuf_set_options", 00:04:21.295 "keyring_get_keys", 00:04:21.295 "vfu_tgt_set_base_path", 00:04:21.295 "framework_get_pci_devices", 00:04:21.295 "framework_get_config", 00:04:21.295 "framework_get_subsystems", 00:04:21.295 "fsdev_set_opts", 00:04:21.295 "fsdev_get_opts", 
00:04:21.295 "trace_get_info", 00:04:21.295 "trace_get_tpoint_group_mask", 00:04:21.295 "trace_disable_tpoint_group", 00:04:21.295 "trace_enable_tpoint_group", 00:04:21.295 "trace_clear_tpoint_mask", 00:04:21.295 "trace_set_tpoint_mask", 00:04:21.295 "notify_get_notifications", 00:04:21.295 "notify_get_types", 00:04:21.295 "spdk_get_version", 00:04:21.295 "rpc_get_methods" 00:04:21.295 ] 00:04:21.295 11:05:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:21.295 11:05:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:21.295 11:05:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1515371 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1515371 ']' 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1515371 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515371 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515371' 00:04:21.295 killing process with pid 1515371 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1515371 00:04:21.295 11:05:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1515371 00:04:21.554 00:04:21.554 real 0m1.571s 00:04:21.554 user 0m2.881s 00:04:21.554 sys 0m0.457s 00:04:21.554 11:05:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.554 11:05:54 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:21.554 ************************************ 00:04:21.554 END TEST spdkcli_tcp 00:04:21.554 ************************************ 00:04:21.812 11:05:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:21.812 11:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.812 11:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.812 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:04:21.812 ************************************ 00:04:21.812 START TEST dpdk_mem_utility 00:04:21.812 ************************************ 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:21.812 * Looking for test storage... 00:04:21.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.812 11:05:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:04:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.812 --rc genhtml_branch_coverage=1 00:04:21.812 --rc genhtml_function_coverage=1 00:04:21.812 --rc genhtml_legend=1 00:04:21.812 --rc geninfo_all_blocks=1 00:04:21.812 --rc geninfo_unexecuted_blocks=1 00:04:21.812 00:04:21.812 ' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.812 --rc genhtml_branch_coverage=1 00:04:21.812 --rc genhtml_function_coverage=1 00:04:21.812 --rc genhtml_legend=1 00:04:21.812 --rc geninfo_all_blocks=1 00:04:21.812 --rc geninfo_unexecuted_blocks=1 00:04:21.812 00:04:21.812 ' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.812 --rc genhtml_branch_coverage=1 00:04:21.812 --rc genhtml_function_coverage=1 00:04:21.812 --rc genhtml_legend=1 00:04:21.812 --rc geninfo_all_blocks=1 00:04:21.812 --rc geninfo_unexecuted_blocks=1 00:04:21.812 00:04:21.812 ' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.812 --rc genhtml_branch_coverage=1 00:04:21.812 --rc genhtml_function_coverage=1 00:04:21.812 --rc genhtml_legend=1 00:04:21.812 --rc geninfo_all_blocks=1 00:04:21.812 --rc geninfo_unexecuted_blocks=1 00:04:21.812 00:04:21.812 ' 00:04:21.812 11:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:21.812 11:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1515714 00:04:21.812 11:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1515714 00:04:21.812 11:05:54 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1515714 ']' 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.812 11:05:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.070 [2024-12-06 11:05:54.783558] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:22.070 [2024-12-06 11:05:54.783602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515714 ] 00:04:22.070 [2024-12-06 11:05:54.856605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.070 [2024-12-06 11:05:54.894078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.005 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.005 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:23.005 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:23.005 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:23.005 11:05:55 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.005 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 { 00:04:23.005 "filename": "/tmp/spdk_mem_dump.txt" 00:04:23.005 } 00:04:23.005 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.005 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:23.005 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:23.005 1 heaps totaling size 818.000000 MiB 00:04:23.005 size: 818.000000 MiB heap id: 0 00:04:23.005 end heaps---------- 00:04:23.005 9 mempools totaling size 603.782043 MiB 00:04:23.005 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:23.005 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:23.005 size: 100.555481 MiB name: bdev_io_1515714 00:04:23.005 size: 50.003479 MiB name: msgpool_1515714 00:04:23.005 size: 36.509338 MiB name: fsdev_io_1515714 00:04:23.005 size: 21.763794 MiB name: PDU_Pool 00:04:23.005 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:23.005 size: 4.133484 MiB name: evtpool_1515714 00:04:23.005 size: 0.026123 MiB name: Session_Pool 00:04:23.005 end mempools------- 00:04:23.005 6 memzones totaling size 4.142822 MiB 00:04:23.005 size: 1.000366 MiB name: RG_ring_0_1515714 00:04:23.005 size: 1.000366 MiB name: RG_ring_1_1515714 00:04:23.005 size: 1.000366 MiB name: RG_ring_4_1515714 00:04:23.005 size: 1.000366 MiB name: RG_ring_5_1515714 00:04:23.005 size: 0.125366 MiB name: RG_ring_2_1515714 00:04:23.005 size: 0.015991 MiB name: RG_ring_3_1515714 00:04:23.005 end memzones------- 00:04:23.005 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:23.005 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:23.005 list of free elements. 
size: 10.852478 MiB 00:04:23.005 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:23.005 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:23.005 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:23.005 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:23.005 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:23.005 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:23.005 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:23.005 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:23.005 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:23.005 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:23.005 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:23.005 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:23.006 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:23.006 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:23.006 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:23.006 list of standard malloc elements. 
size: 199.218628 MiB 00:04:23.006 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:23.006 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:23.006 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:23.006 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:23.006 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:23.006 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:23.006 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:23.006 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:23.006 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:23.006 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:23.006 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:23.006 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:23.006 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:23.006 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:23.006 list of memzone associated elements. 
size: 607.928894 MiB 00:04:23.006 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:23.006 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:23.006 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:23.006 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:23.006 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:23.006 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1515714_0 00:04:23.006 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:23.006 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1515714_0 00:04:23.006 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:23.006 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1515714_0 00:04:23.006 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:23.006 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:23.006 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:23.006 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:23.006 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:23.006 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1515714_0 00:04:23.006 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:23.006 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1515714 00:04:23.006 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:23.006 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1515714 00:04:23.006 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:23.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:23.006 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:23.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:23.006 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:23.006 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:23.006 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:23.006 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:23.006 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:23.006 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1515714 00:04:23.006 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:23.006 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1515714 00:04:23.006 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:23.006 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1515714 00:04:23.006 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:23.006 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1515714 00:04:23.006 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:23.006 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1515714 00:04:23.006 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:23.006 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1515714 00:04:23.006 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:23.006 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:23.006 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:23.006 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:23.006 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:23.006 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:23.006 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:23.006 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1515714 00:04:23.006 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:23.006 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1515714 00:04:23.006 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:23.006 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:23.006 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:23.006 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:23.006 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:23.006 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1515714 00:04:23.006 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:23.006 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:23.006 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:23.006 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1515714 00:04:23.006 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:23.006 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1515714 00:04:23.006 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:23.006 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1515714 00:04:23.006 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:23.006 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:23.006 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:23.006 11:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1515714 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1515714 ']' 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1515714 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515714 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.006 11:05:55 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515714' 00:04:23.006 killing process with pid 1515714 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1515714 00:04:23.006 11:05:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1515714 00:04:23.266 00:04:23.266 real 0m1.493s 00:04:23.266 user 0m1.564s 00:04:23.266 sys 0m0.427s 00:04:23.266 11:05:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.266 11:05:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 END TEST dpdk_mem_utility 00:04:23.266 ************************************ 00:04:23.266 11:05:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:23.266 11:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.266 11:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.266 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 START TEST event 00:04:23.266 ************************************ 00:04:23.266 11:05:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:23.266 * Looking for test storage... 
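Before each suite, the xtrace above walks the version comparison from scripts/common.sh (`lt 1.15 2`) to decide whether the installed lcov is new enough for the branch-coverage `--rc` options: both version strings are split on `.`, `-`, and `:` and their pieces are compared numerically, left to right. A minimal re-sketch of that compare loop, reconstructed from the trace rather than taken from the actual scripts/common.sh source:

```shell
#!/usr/bin/env bash
# Approximate reconstruction of the cmp_versions logic traced above:
# split each version string on '.', '-', ':' and compare the pieces
# numerically, left to right; a missing piece is treated as 0.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"
```

With lcov 1.15 installed, the compare returns true for `lt 1.15 2`, which is why every suite in this log exports the fallback `LCOV_OPTS`/`LCOV` variables with the `--rc lcov_branch_coverage=1 ...` flags spelled out.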
00:04:23.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:23.524 11:05:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.524 11:05:56 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.524 11:05:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.524 11:05:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.524 11:05:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.524 11:05:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.524 11:05:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.524 11:05:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.524 11:05:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.524 11:05:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.524 11:05:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.524 11:05:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.524 11:05:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.524 11:05:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.524 11:05:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.524 11:05:56 event -- scripts/common.sh@344 -- # case "$op" in 00:04:23.524 11:05:56 event -- scripts/common.sh@345 -- # : 1 00:04:23.524 11:05:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.524 11:05:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.524 11:05:56 event -- scripts/common.sh@365 -- # decimal 1 00:04:23.524 11:05:56 event -- scripts/common.sh@353 -- # local d=1 00:04:23.525 11:05:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.525 11:05:56 event -- scripts/common.sh@355 -- # echo 1 00:04:23.525 11:05:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.525 11:05:56 event -- scripts/common.sh@366 -- # decimal 2 00:04:23.525 11:05:56 event -- scripts/common.sh@353 -- # local d=2 00:04:23.525 11:05:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.525 11:05:56 event -- scripts/common.sh@355 -- # echo 2 00:04:23.525 11:05:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.525 11:05:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.525 11:05:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.525 11:05:56 event -- scripts/common.sh@368 -- # return 0 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.525 --rc genhtml_branch_coverage=1 00:04:23.525 --rc genhtml_function_coverage=1 00:04:23.525 --rc genhtml_legend=1 00:04:23.525 --rc geninfo_all_blocks=1 00:04:23.525 --rc geninfo_unexecuted_blocks=1 00:04:23.525 00:04:23.525 ' 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.525 --rc genhtml_branch_coverage=1 00:04:23.525 --rc genhtml_function_coverage=1 00:04:23.525 --rc genhtml_legend=1 00:04:23.525 --rc geninfo_all_blocks=1 00:04:23.525 --rc geninfo_unexecuted_blocks=1 00:04:23.525 00:04:23.525 ' 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.525 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:23.525 --rc genhtml_branch_coverage=1 00:04:23.525 --rc genhtml_function_coverage=1 00:04:23.525 --rc genhtml_legend=1 00:04:23.525 --rc geninfo_all_blocks=1 00:04:23.525 --rc geninfo_unexecuted_blocks=1 00:04:23.525 00:04:23.525 ' 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.525 --rc genhtml_branch_coverage=1 00:04:23.525 --rc genhtml_function_coverage=1 00:04:23.525 --rc genhtml_legend=1 00:04:23.525 --rc geninfo_all_blocks=1 00:04:23.525 --rc geninfo_unexecuted_blocks=1 00:04:23.525 00:04:23.525 ' 00:04:23.525 11:05:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:23.525 11:05:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:23.525 11:05:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:23.525 11:05:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.525 11:05:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.525 ************************************ 00:04:23.525 START TEST event_perf 00:04:23.525 ************************************ 00:04:23.525 11:05:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:23.525 Running I/O for 1 seconds...[2024-12-06 11:05:56.345116] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:23.525 [2024-12-06 11:05:56.345183] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516047 ] 00:04:23.525 [2024-12-06 11:05:56.426123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:23.802 [2024-12-06 11:05:56.467517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.802 [2024-12-06 11:05:56.467631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:23.802 [2024-12-06 11:05:56.467744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.802 Running I/O for 1 seconds...[2024-12-06 11:05:56.467746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.737 00:04:24.737 lcore 0: 211891 00:04:24.737 lcore 1: 211890 00:04:24.737 lcore 2: 211890 00:04:24.737 lcore 3: 211889 00:04:24.737 done. 
00:04:24.737 00:04:24.737 real 0m1.182s 00:04:24.737 user 0m4.098s 00:04:24.737 sys 0m0.080s 00:04:24.737 11:05:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.737 11:05:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.737 ************************************ 00:04:24.737 END TEST event_perf 00:04:24.737 ************************************ 00:04:24.737 11:05:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:24.737 11:05:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:24.737 11:05:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.737 11:05:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.737 ************************************ 00:04:24.737 START TEST event_reactor 00:04:24.737 ************************************ 00:04:24.737 11:05:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:24.737 [2024-12-06 11:05:57.596456] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
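Each suite in this log is bracketed by `run_test`, which prints the starred START/END banners and the `real/user/sys` timing seen above, then propagates the wrapped command's exit status. A hedged, hypothetical sketch of that wrapper, inferred from the banners in this output rather than from the actual common/autotest_common.sh source (which also toggles xtrace, omitted here):

```shell
#!/usr/bin/env bash
# Hypothetical minimal run_test: emit the banners seen in this log,
# time the wrapped command, and return its exit status unchanged.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo echo "hello from demo"
```

Because the exit status is propagated, a failing inner command (such as `event_perf` crashing) fails the whole `run_test` call, which is what lets the outer autotest.sh abort the job on the first broken suite.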
00:04:24.737 [2024-12-06 11:05:57.596525] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516331 ] 00:04:24.737 [2024-12-06 11:05:57.673671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.995 [2024-12-06 11:05:57.709618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.931 test_start 00:04:25.931 oneshot 00:04:25.931 tick 100 00:04:25.931 tick 100 00:04:25.931 tick 250 00:04:25.931 tick 100 00:04:25.931 tick 100 00:04:25.931 tick 250 00:04:25.931 tick 100 00:04:25.931 tick 500 00:04:25.931 tick 100 00:04:25.931 tick 100 00:04:25.931 tick 250 00:04:25.931 tick 100 00:04:25.931 tick 100 00:04:25.931 test_end 00:04:25.931 00:04:25.931 real 0m1.172s 00:04:25.931 user 0m1.097s 00:04:25.931 sys 0m0.072s 00:04:25.931 11:05:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.931 11:05:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:25.931 ************************************ 00:04:25.931 END TEST event_reactor 00:04:25.931 ************************************ 00:04:25.931 11:05:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:25.931 11:05:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:25.931 11:05:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.931 11:05:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.931 ************************************ 00:04:25.931 START TEST event_reactor_perf 00:04:25.931 ************************************ 00:04:25.931 11:05:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:25.931 [2024-12-06 11:05:58.835095] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:25.931 [2024-12-06 11:05:58.835170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516609 ] 00:04:26.190 [2024-12-06 11:05:58.911568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.190 [2024-12-06 11:05:58.946150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.126 test_start 00:04:27.126 test_end 00:04:27.126 Performance: 558060 events per second 00:04:27.126 00:04:27.126 real 0m1.166s 00:04:27.126 user 0m1.096s 00:04:27.126 sys 0m0.066s 00:04:27.126 11:05:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.126 11:05:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:27.126 ************************************ 00:04:27.126 END TEST event_reactor_perf 00:04:27.126 ************************************ 00:04:27.126 11:06:00 event -- event/event.sh@49 -- # uname -s 00:04:27.126 11:06:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:27.126 11:06:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:27.126 11:06:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.126 11:06:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.126 11:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.126 ************************************ 00:04:27.126 START TEST event_scheduler 00:04:27.126 ************************************ 00:04:27.126 11:06:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:27.385 * Looking for test storage... 00:04:27.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.385 11:06:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.385 --rc genhtml_branch_coverage=1 00:04:27.385 --rc genhtml_function_coverage=1 00:04:27.385 --rc genhtml_legend=1 00:04:27.385 --rc geninfo_all_blocks=1 00:04:27.385 --rc geninfo_unexecuted_blocks=1 00:04:27.385 00:04:27.385 ' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.385 --rc genhtml_branch_coverage=1 00:04:27.385 --rc genhtml_function_coverage=1 00:04:27.385 --rc 
genhtml_legend=1 00:04:27.385 --rc geninfo_all_blocks=1 00:04:27.385 --rc geninfo_unexecuted_blocks=1 00:04:27.385 00:04:27.385 ' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.385 --rc genhtml_branch_coverage=1 00:04:27.385 --rc genhtml_function_coverage=1 00:04:27.385 --rc genhtml_legend=1 00:04:27.385 --rc geninfo_all_blocks=1 00:04:27.385 --rc geninfo_unexecuted_blocks=1 00:04:27.385 00:04:27.385 ' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.385 --rc genhtml_branch_coverage=1 00:04:27.385 --rc genhtml_function_coverage=1 00:04:27.385 --rc genhtml_legend=1 00:04:27.385 --rc geninfo_all_blocks=1 00:04:27.385 --rc geninfo_unexecuted_blocks=1 00:04:27.385 00:04:27.385 ' 00:04:27.385 11:06:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:27.385 11:06:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1516954 00:04:27.385 11:06:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:27.385 11:06:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.385 11:06:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1516954 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1516954 ']' 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.385 11:06:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.386 11:06:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.386 11:06:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.386 [2024-12-06 11:06:00.272550] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:27.386 [2024-12-06 11:06:00.272601] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516954 ] 00:04:27.644 [2024-12-06 11:06:00.346664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:27.644 [2024-12-06 11:06:00.389126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.644 [2024-12-06 11:06:00.389243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.645 [2024-12-06 11:06:00.389313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.645 [2024-12-06 11:06:00.389314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:28.213 11:06:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.213 [2024-12-06 11:06:01.103783] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:28.213 [2024-12-06 11:06:01.103803] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:28.213 [2024-12-06 11:06:01.103811] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:28.213 [2024-12-06 11:06:01.103817] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:28.213 [2024-12-06 11:06:01.103821] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.213 11:06:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.213 11:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 [2024-12-06 11:06:01.177585] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:28.472 11:06:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.472 11:06:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:28.472 11:06:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.472 11:06:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.472 11:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 ************************************ 00:04:28.472 START TEST scheduler_create_thread 00:04:28.472 ************************************ 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 2 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 3 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 4 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.472 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.472 5 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 6 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 7 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 8 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 9 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 10 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.473 11:06:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.473 11:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.850 11:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.850 11:06:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:29.850 11:06:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:29.850 11:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.850 11:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.229 11:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.229 00:04:31.229 real 0m2.621s 00:04:31.229 user 0m0.022s 00:04:31.229 sys 0m0.006s 00:04:31.229 11:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.229 11:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.229 ************************************ 00:04:31.229 END TEST scheduler_create_thread 00:04:31.229 ************************************ 00:04:31.229 11:06:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:31.229 11:06:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1516954 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1516954 ']' 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1516954 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1516954 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1516954' 00:04:31.229 killing process with pid 1516954 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1516954 00:04:31.229 11:06:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1516954 00:04:31.487 [2024-12-06 11:06:04.311725] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:31.745 00:04:31.745 real 0m4.422s 00:04:31.745 user 0m8.427s 00:04:31.745 sys 0m0.403s 00:04:31.745 11:06:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.745 11:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.745 ************************************ 00:04:31.745 END TEST event_scheduler 00:04:31.745 ************************************ 00:04:31.745 11:06:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:31.745 11:06:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:31.745 11:06:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.745 11:06:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.745 11:06:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.745 ************************************ 00:04:31.745 START TEST app_repeat 00:04:31.745 ************************************ 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1517895 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1517895' 00:04:31.745 Process app_repeat pid: 1517895 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:31.745 spdk_app_start Round 0 00:04:31.745 11:06:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1517895 /var/tmp/spdk-nbd.sock 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1517895 ']' 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.745 11:06:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.745 [2024-12-06 11:06:04.591014] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:31.745 [2024-12-06 11:06:04.591073] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517895 ] 00:04:31.745 [2024-12-06 11:06:04.663981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.002 [2024-12-06 11:06:04.705602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.002 [2024-12-06 11:06:04.705603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.002 11:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.002 11:06:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:32.002 11:06:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.260 Malloc0 00:04:32.260 11:06:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.260 Malloc1 00:04:32.260 11:06:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.261 
11:06:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.261 11:06:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:32.519 /dev/nbd0 00:04:32.519 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.519 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:32.519 1+0 records in 00:04:32.519 1+0 records out 00:04:32.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180555 s, 22.7 MB/s 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.519 11:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.519 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.519 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.519 11:06:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.778 /dev/nbd1 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.778 11:06:05 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.778 1+0 records in 00:04:32.778 1+0 records out 00:04:32.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023359 s, 17.5 MB/s 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.778 11:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.778 11:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.036 { 00:04:33.036 "nbd_device": "/dev/nbd0", 00:04:33.036 "bdev_name": "Malloc0" 00:04:33.036 }, 00:04:33.036 { 00:04:33.036 "nbd_device": "/dev/nbd1", 00:04:33.036 "bdev_name": "Malloc1" 00:04:33.036 } 00:04:33.036 ]' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.036 { 00:04:33.036 "nbd_device": "/dev/nbd0", 00:04:33.036 "bdev_name": "Malloc0" 00:04:33.036 
}, 00:04:33.036 { 00:04:33.036 "nbd_device": "/dev/nbd1", 00:04:33.036 "bdev_name": "Malloc1" 00:04:33.036 } 00:04:33.036 ]' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.036 /dev/nbd1' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.036 /dev/nbd1' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.036 256+0 records in 00:04:33.036 256+0 records out 00:04:33.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106357 s, 98.6 MB/s 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.036 256+0 records in 00:04:33.036 256+0 records out 00:04:33.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129026 s, 81.3 MB/s 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.036 256+0 records in 00:04:33.036 256+0 records out 00:04:33.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140703 s, 74.5 MB/s 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.036 11:06:05 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.036 11:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.037 11:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.037 11:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.295 11:06:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:33.553 11:06:06 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.553 11:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.810 11:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.810 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.810 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.811 11:06:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.811 11:06:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.068 11:06:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:34.068 [2024-12-06 11:06:06.952054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.068 [2024-12-06 11:06:06.986275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.068 [2024-12-06 11:06:06.986276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.326 [2024-12-06 11:06:07.026345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:34.326 [2024-12-06 11:06:07.026382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:37.608 11:06:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.608 11:06:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:37.608 spdk_app_start Round 1 00:04:37.608 11:06:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1517895 /var/tmp/spdk-nbd.sock 00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1517895 ']' 00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
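The `waitfornbd_exit` calls traced above poll `/proc/partitions` up to 20 times for the device name to disappear before declaring the nbd disk stopped. The helper itself lives in SPDK's test scripts; the following is a minimal standalone sketch of that polling pattern reconstructed from the trace (the function name `wait_for_name_gone` and the file-based demo are illustrative, not SPDK's actual code), demoed against a throwaway partitions file rather than the real `/proc/partitions`:

```shell
# Poll a partitions table until $name no longer appears as a whole word,
# mirroring the (( i <= 20 )) / grep -q -w / break sequence in the log.
wait_for_name_gone() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        # Success: the device name is gone from the table.
        grep -q -w "$name" "$table" || return 0
        sleep 0.1
    done
    return 1  # still present after 20 polls
}

# Demo against a scratch file standing in for /proc/partitions.
tmp=$(mktemp)
printf '8 0 1000 sda\n' > "$tmp"
wait_for_name_gone nbd0 "$tmp" && echo "nbd0 gone"
rm -f "$tmp"
```

The 20-iteration bound matches the `(( i <= 20 ))` guard visible in the trace; the sleep interval is an assumption, since the log does not show the delay between polls.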
00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.608 11:06:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.608 11:06:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.608 11:06:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:37.608 11:06:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.608 Malloc0 00:04:37.608 11:06:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.608 Malloc1 00:04:37.608 11:06:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.608 11:06:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.867 /dev/nbd0 00:04:37.867 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.867 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.867 1+0 records in 00:04:37.867 1+0 records out 00:04:37.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197961 s, 20.7 MB/s 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.867 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.867 11:06:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.868 11:06:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.868 11:06:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.868 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.868 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.868 11:06:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.126 /dev/nbd1 00:04:38.126 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.126 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.126 1+0 records in 00:04:38.126 1+0 records out 00:04:38.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191267 s, 21.4 MB/s 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.126 11:06:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.126 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.126 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.127 11:06:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.127 11:06:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.127 11:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.127 11:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.127 { 00:04:38.127 "nbd_device": "/dev/nbd0", 00:04:38.127 "bdev_name": "Malloc0" 00:04:38.127 }, 00:04:38.127 { 00:04:38.127 "nbd_device": "/dev/nbd1", 00:04:38.127 "bdev_name": "Malloc1" 00:04:38.127 } 00:04:38.127 ]' 00:04:38.127 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.127 { 00:04:38.127 "nbd_device": "/dev/nbd0", 00:04:38.127 "bdev_name": "Malloc0" 00:04:38.127 }, 00:04:38.127 { 00:04:38.127 "nbd_device": "/dev/nbd1", 00:04:38.127 "bdev_name": "Malloc1" 00:04:38.127 } 00:04:38.127 ]' 00:04:38.127 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.385 /dev/nbd1' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.385 /dev/nbd1' 00:04:38.385 
11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.385 256+0 records in 00:04:38.385 256+0 records out 00:04:38.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103367 s, 101 MB/s 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.385 256+0 records in 00:04:38.385 256+0 records out 00:04:38.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012958 s, 80.9 MB/s 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.385 256+0 records in 00:04:38.385 256+0 records out 00:04:38.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014006 s, 74.9 MB/s 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.385 11:06:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.643 11:06:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.643 11:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.902 11:06:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.902 11:06:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.161 11:06:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.420 [2024-12-06 11:06:12.141766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.420 [2024-12-06 11:06:12.174949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.420 [2024-12-06 11:06:12.174950] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.420 [2024-12-06 11:06:12.215925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.420 [2024-12-06 11:06:12.215963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:42.704 spdk_app_start Round 2 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1517895 /var/tmp/spdk-nbd.sock 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1517895 ']' 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
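The `nbd_get_count` sequence in the trace above pipes the `nbd_get_disks` RPC output through `jq -r '.[] | .nbd_device'` and then `grep -c /dev/nbd`, with `true` absorbing grep's exit status 1 when zero devices remain. A hedged reconstruction of that counting logic (the `count_nbd_devices` wrapper is hypothetical; only the jq filter and grep pattern come from the log):

```shell
# Count nbd device paths in an nbd_get_disks-style JSON array.
# grep -c prints 0 and exits 1 on no matches, so '|| true' keeps the
# pipeline's exit status clean, as the '-- # true' line in the log shows.
count_nbd_devices() {
    local disks_json=$1
    echo "$disks_json" \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true
}

json='[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
       {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'
count_nbd_devices "$json"   # two devices attached
count_nbd_devices '[]'      # all devices stopped
```

This matches the two states the trace passes through: `count=2` right after `nbd_start_disk`, and `count=0` after both `nbd_stop_disk` calls, which is what the `'[' 0 -ne 0 ']'` check at `nbd_common.sh@105` asserts before returning.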
00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.704 11:06:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.704 Malloc0 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.704 Malloc1 00:04:42.704 11:06:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.704 11:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.962 /dev/nbd0 00:04:42.962 11:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.962 11:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.962 1+0 records in 00:04:42.962 1+0 records out 00:04:42.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233549 s, 17.5 MB/s 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.962 11:06:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.962 11:06:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.962 11:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.962 11:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.962 11:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.220 /dev/nbd1 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.220 1+0 records in 00:04:43.220 1+0 records out 00:04:43.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170606 s, 24.0 MB/s 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.220 11:06:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.220 11:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.478 { 00:04:43.478 "nbd_device": "/dev/nbd0", 00:04:43.478 "bdev_name": "Malloc0" 00:04:43.478 }, 00:04:43.478 { 00:04:43.478 "nbd_device": "/dev/nbd1", 00:04:43.478 "bdev_name": "Malloc1" 00:04:43.478 } 00:04:43.478 ]' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.478 { 00:04:43.478 "nbd_device": "/dev/nbd0", 00:04:43.478 "bdev_name": "Malloc0" 00:04:43.478 }, 00:04:43.478 { 00:04:43.478 "nbd_device": "/dev/nbd1", 00:04:43.478 "bdev_name": "Malloc1" 00:04:43.478 } 00:04:43.478 ]' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.478 /dev/nbd1' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.478 /dev/nbd1' 00:04:43.478 
11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.478 256+0 records in 00:04:43.478 256+0 records out 00:04:43.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107536 s, 97.5 MB/s 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.478 256+0 records in 00:04:43.478 256+0 records out 00:04:43.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130032 s, 80.6 MB/s 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.478 256+0 records in 00:04:43.478 256+0 records out 00:04:43.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013975 s, 75.0 MB/s 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.478 11:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.736 11:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.995 11:06:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.995 11:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.302 11:06:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.302 11:06:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.302 11:06:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.623 [2024-12-06 11:06:17.324596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.623 [2024-12-06 11:06:17.358107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.623 [2024-12-06 11:06:17.358110] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.623 [2024-12-06 11:06:17.398020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.623 [2024-12-06 11:06:17.398061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.947 11:06:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1517895 /var/tmp/spdk-nbd.sock 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1517895 ']' 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
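The `nbd_dd_data_verify` flow traced above follows a simple shape: `dd` writes a random pattern file to each NBD device, then `cmp` reads each device back against that file. A minimal, self-contained sketch of that write-then-verify loop — plain temp files stand in for `/dev/nbd0` and `/dev/nbd1`, and `oflag=direct` is dropped, so it runs without NBD:

```shell
#!/usr/bin/env bash
# Write-then-verify sketch of the nbd_dd_data_verify pattern (nbd_common.sh).
# The target "devices" here are ordinary temp files so the sketch is runnable
# anywhere; on real NBD devices the trace additionally uses oflag=direct.
set -e
tmp_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")   # stand-ins for /dev/nbd0 /dev/nbd1

# Generate 1 MiB of random data (256 blocks of 4096 bytes).
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

for dev in "${nbd_list[@]}"; do      # write phase
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done
for dev in "${nbd_list[@]}"; do      # verify phase: byte-compare first 1 MiB
    cmp -n 1048576 "$tmp_file" "$dev"
done
echo "verify ok"
rm -f "$tmp_file" "${nbd_list[@]}"
```

`cmp` exits non-zero on the first mismatching byte, so under `set -e` any corruption aborts the run before "verify ok" is printed.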
00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.947 11:06:20 event.app_repeat -- event/event.sh@39 -- # killprocess 1517895 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1517895 ']' 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1517895 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517895 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517895' 00:04:47.947 killing process with pid 1517895 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1517895 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1517895 00:04:47.947 spdk_app_start is called in Round 0. 00:04:47.947 Shutdown signal received, stop current app iteration 00:04:47.947 Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 reinitialization... 00:04:47.947 spdk_app_start is called in Round 1. 00:04:47.947 Shutdown signal received, stop current app iteration 00:04:47.947 Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 reinitialization... 00:04:47.947 spdk_app_start is called in Round 2. 
00:04:47.947 Shutdown signal received, stop current app iteration 00:04:47.947 Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 reinitialization... 00:04:47.947 spdk_app_start is called in Round 3. 00:04:47.947 Shutdown signal received, stop current app iteration 00:04:47.947 11:06:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:47.947 11:06:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:47.947 00:04:47.947 real 0m16.003s 00:04:47.947 user 0m35.042s 00:04:47.947 sys 0m2.422s 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.947 11:06:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.947 ************************************ 00:04:47.947 END TEST app_repeat 00:04:47.947 ************************************ 00:04:47.947 11:06:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:47.947 11:06:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.947 11:06:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.947 11:06:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.947 11:06:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.947 ************************************ 00:04:47.947 START TEST cpu_locks 00:04:47.947 ************************************ 00:04:47.947 11:06:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.947 * Looking for test storage... 
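The cpu_locks trace that follows probes the installed `lcov` version with the componentwise comparator from `scripts/common.sh`: split each version string on `.`, `-`, or `:`, then compare field by field as integers. A hedged re-implementation of that comparison — names follow the trace, but the `decimal` guard for non-numeric fields is simplified away:

```shell
# Componentwise "less-than" version compare, after scripts/common.sh.
# Fields are split on . - : and compared numerically; missing fields are 0.
cmp_lt() {                     # returns 0 (true) if $1 < $2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                   # equal versions are not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison is the point: `1.2 < 1.10` holds here (2 < 10), which a lexicographic string compare would get wrong.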
00:04:47.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.947 11:06:20 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.947 11:06:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.947 11:06:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.948 11:06:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.948 --rc genhtml_branch_coverage=1 00:04:47.948 --rc genhtml_function_coverage=1 00:04:47.948 --rc genhtml_legend=1 00:04:47.948 --rc geninfo_all_blocks=1 00:04:47.948 --rc geninfo_unexecuted_blocks=1 00:04:47.948 00:04:47.948 ' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.948 --rc genhtml_branch_coverage=1 00:04:47.948 --rc genhtml_function_coverage=1 00:04:47.948 --rc genhtml_legend=1 00:04:47.948 --rc geninfo_all_blocks=1 00:04:47.948 --rc geninfo_unexecuted_blocks=1 
00:04:47.948 00:04:47.948 ' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.948 --rc genhtml_branch_coverage=1 00:04:47.948 --rc genhtml_function_coverage=1 00:04:47.948 --rc genhtml_legend=1 00:04:47.948 --rc geninfo_all_blocks=1 00:04:47.948 --rc geninfo_unexecuted_blocks=1 00:04:47.948 00:04:47.948 ' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.948 --rc genhtml_branch_coverage=1 00:04:47.948 --rc genhtml_function_coverage=1 00:04:47.948 --rc genhtml_legend=1 00:04:47.948 --rc geninfo_all_blocks=1 00:04:47.948 --rc geninfo_unexecuted_blocks=1 00:04:47.948 00:04:47.948 ' 00:04:47.948 11:06:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:47.948 11:06:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:47.948 11:06:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:47.948 11:06:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.948 11:06:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.948 ************************************ 00:04:47.948 START TEST default_locks 00:04:47.948 ************************************ 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1521430 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1521430 00:04:47.948 11:06:20 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1521430 ']' 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.948 11:06:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.207 [2024-12-06 11:06:20.886899] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:48.207 [2024-12-06 11:06:20.886943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521430 ] 00:04:48.207 [2024-12-06 11:06:20.956668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.207 [2024-12-06 11:06:20.995527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.774 11:06:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.774 11:06:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:48.774 11:06:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1521430 00:04:48.774 11:06:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1521430 00:04:48.774 11:06:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.341 lslocks: write error 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1521430 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1521430 ']' 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1521430 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.341 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1521430 00:04:49.600 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.600 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.600 11:06:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1521430' 00:04:49.600 killing process with pid 1521430 00:04:49.600 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1521430 00:04:49.600 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1521430 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1521430 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1521430 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1521430 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1521430 ']' 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
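The `killprocess` helper exercised above follows a fixed shape: confirm the pid is still alive with `kill -0`, read its comm name via `ps`, refuse to signal a `sudo` wrapper, then kill and reap. A simplified sketch of that shape (Linux `ps` options; names borrowed from the trace, error handling reduced):

```shell
# Simplified killprocess sketch (autotest_common.sh shape): verify liveness,
# never signal a sudo wrapper, then SIGTERM and reap the child.
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1        # already gone?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1        # refuse to kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}

sleep 60 &
killprocess_sketch $!
```

`kill -0` sends no signal at all; it only tests whether the pid exists and is signalable, which is why the trace uses it both before killing and (negated) afterwards to confirm the process is gone.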
00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1521430) - No such process 00:04:49.858 ERROR: process (pid: 1521430) is no longer running 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.858 00:04:49.858 real 0m1.751s 00:04:49.858 user 0m1.844s 00:04:49.858 sys 0m0.584s 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.858 11:06:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 ************************************ 00:04:49.858 END TEST default_locks 00:04:49.858 ************************************ 00:04:49.858 11:06:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:49.858 11:06:22 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.858 11:06:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.858 11:06:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 ************************************ 00:04:49.858 START TEST default_locks_via_rpc 00:04:49.858 ************************************ 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1521955 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1521955 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1521955 ']' 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.858 11:06:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 [2024-12-06 11:06:22.708102] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:49.858 [2024-12-06 11:06:22.708145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521955 ] 00:04:49.858 [2024-12-06 11:06:22.779644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.116 [2024-12-06 11:06:22.818686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.684 11:06:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1521955 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1521955 00:04:50.684 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1521955 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1521955 ']' 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1521955 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1521955 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1521955' 00:04:51.252 killing process with pid 1521955 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1521955 00:04:51.252 11:06:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1521955 00:04:51.511 00:04:51.511 real 0m1.632s 00:04:51.511 user 0m1.717s 00:04:51.511 sys 0m0.546s 00:04:51.511 11:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.511 11:06:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 ************************************ 00:04:51.511 END TEST default_locks_via_rpc 00:04:51.511 ************************************ 00:04:51.511 11:06:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:51.511 11:06:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.511 11:06:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.511 11:06:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 ************************************ 00:04:51.511 START TEST non_locking_app_on_locked_coremask 00:04:51.511 ************************************ 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1522262 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1522262 /var/tmp/spdk.sock 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1522262 ']' 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:51.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.511 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 [2024-12-06 11:06:24.409846] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:51.511 [2024-12-06 11:06:24.409886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522262 ] 00:04:51.769 [2024-12-06 11:06:24.480993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.769 [2024-12-06 11:06:24.514371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1522276 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1522276 /var/tmp/spdk2.sock 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1522276 ']' 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.027 11:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.027 [2024-12-06 11:06:24.784110] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:52.027 [2024-12-06 11:06:24.784154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522276 ] 00:04:52.027 [2024-12-06 11:06:24.863813] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
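The second `spdk_tgt` above is launched with `--disable-cpumask-locks` because the first instance already holds the per-core lock for core 0 (hence the "CPU core locks deactivated" notice). The underlying contention is ordinary `flock` semantics, which this runnable sketch demonstrates with a temp file standing in for a per-core lock file:

```shell
# flock contention demo: a second, independent open of the same lock file
# cannot take the lock non-blockingly while the first holder keeps it --
# the situation --disable-cpumask-locks exists to bypass.
lockfile=$(mktemp)             # stands in for a per-core lock file
exec 9>"$lockfile"
flock -n 9 && echo "first holder acquired"
( exec 8>"$lockfile"           # a separate open file description
  flock -n 8 || echo "second holder refused" )
exec 9>&-                      # closing the fd releases the lock
rm -f "$lockfile"
```

Because `flock` locks attach to the open file description, the subshell's fresh open of the same path contends with fd 9 and its non-blocking attempt fails immediately.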
00:04:52.027 [2024-12-06 11:06:24.863844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.028 [2024-12-06 11:06:24.937177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.962 11:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.962 11:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.962 11:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1522262 00:04:52.962 11:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1522262 00:04:52.962 11:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.221 lslocks: write error 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1522262 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1522262 ']' 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1522262 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522262 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1522262' 00:04:53.221 killing process with pid 1522262 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1522262 00:04:53.221 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1522262 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1522276 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1522276 ']' 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1522276 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522276 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.791 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522276' 00:04:53.792 killing process with pid 1522276 00:04:53.792 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1522276 00:04:53.792 11:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1522276 00:04:54.358 00:04:54.358 real 0m2.669s 00:04:54.358 user 0m2.793s 00:04:54.358 sys 0m0.878s 00:04:54.358 11:06:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.358 11:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.358 ************************************ 00:04:54.358 END TEST non_locking_app_on_locked_coremask 00:04:54.358 ************************************ 00:04:54.358 11:06:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:54.358 11:06:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.358 11:06:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.358 11:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.358 ************************************ 00:04:54.358 START TEST locking_app_on_unlocked_coremask 00:04:54.358 ************************************ 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1522828 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1522828 /var/tmp/spdk.sock 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1522828 ']' 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.358 11:06:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.358 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.358 [2024-12-06 11:06:27.143028] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:54.358 [2024-12-06 11:06:27.143076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522828 ] 00:04:54.358 [2024-12-06 11:06:27.214827] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:54.358 [2024-12-06 11:06:27.214851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.358 [2024-12-06 11:06:27.253932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1522833 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1522833 /var/tmp/spdk2.sock 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1522833 ']' 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.616 11:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.616 [2024-12-06 11:06:27.516069] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:54.616 [2024-12-06 11:06:27.516114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522833 ] 00:04:54.872 [2024-12-06 11:06:27.594404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.872 [2024-12-06 11:06:27.668007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.438 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.438 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.438 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1522833 00:04:55.438 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1522833 00:04:55.438 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.004 lslocks: write error 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1522828 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1522828 ']' 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1522828 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.004 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522828 00:04:56.262 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.262 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.262 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522828' 00:04:56.262 killing process with pid 1522828 00:04:56.262 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1522828 00:04:56.262 11:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1522828 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1522833 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1522833 ']' 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1522833 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522833 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522833' 00:04:56.831 killing process with pid 1522833 00:04:56.831 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1522833 00:04:56.831 11:06:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1522833 00:04:57.089 00:04:57.089 real 0m2.805s 00:04:57.089 user 0m2.915s 00:04:57.089 sys 0m0.952s 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 ************************************ 00:04:57.089 END TEST locking_app_on_unlocked_coremask 00:04:57.089 ************************************ 00:04:57.089 11:06:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:57.089 11:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.089 11:06:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.089 11:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 ************************************ 00:04:57.089 START TEST locking_app_on_locked_coremask 00:04:57.089 ************************************ 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1523392 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1523392 /var/tmp/spdk.sock 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1523392 ']' 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.089 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.090 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.090 11:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.090 [2024-12-06 11:06:30.010703] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:57.090 [2024-12-06 11:06:30.010743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523392 ] 00:04:57.348 [2024-12-06 11:06:30.085873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.348 [2024-12-06 11:06:30.124681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1523395 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1523395 /var/tmp/spdk2.sock 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1523395 /var/tmp/spdk2.sock 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1523395 /var/tmp/spdk2.sock 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1523395 ']' 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.606 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:57.607 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.607 11:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.607 [2024-12-06 11:06:30.401724] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:57.607 [2024-12-06 11:06:30.401761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523395 ] 00:04:57.607 [2024-12-06 11:06:30.484213] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1523392 has claimed it. 00:04:57.607 [2024-12-06 11:06:30.484252] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1523395) - No such process 00:04:58.174 ERROR: process (pid: 1523395) is no longer running 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1523392 00:04:58.174 11:06:31 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1523392 00:04:58.174 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.433 lslocks: write error 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1523392 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1523392 ']' 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1523392 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523392 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523392' 00:04:58.433 killing process with pid 1523392 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1523392 00:04:58.433 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1523392 00:04:58.692 00:04:58.692 real 0m1.629s 00:04:58.692 user 0m1.733s 00:04:58.692 sys 0m0.528s 00:04:58.692 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.692 11:06:31 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.692 ************************************ 00:04:58.692 END TEST locking_app_on_locked_coremask 00:04:58.692 ************************************ 00:04:58.692 11:06:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:58.692 11:06:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.692 11:06:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.952 11:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.952 ************************************ 00:04:58.952 START TEST locking_overlapped_coremask 00:04:58.952 ************************************ 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1523687 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1523687 /var/tmp/spdk.sock 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1523687 ']' 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.952 11:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.952 [2024-12-06 11:06:31.717319] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:04:58.952 [2024-12-06 11:06:31.717357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523687 ] 00:04:58.952 [2024-12-06 11:06:31.789859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.952 [2024-12-06 11:06:31.831654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.952 [2024-12-06 11:06:31.831766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.952 [2024-12-06 11:06:31.831767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1523805 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1523805 /var/tmp/spdk2.sock 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1523805 /var/tmp/spdk2.sock 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1523805 /var/tmp/spdk2.sock 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1523805 ']' 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.890 11:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.890 [2024-12-06 11:06:32.583823] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:04:59.890 [2024-12-06 11:06:32.583868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523805 ] 00:04:59.890 [2024-12-06 11:06:32.669501] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1523687 has claimed it. 00:04:59.890 [2024-12-06 11:06:32.669538] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:00.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1523805) - No such process 00:05:00.458 ERROR: process (pid: 1523805) is no longer running 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1523687 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1523687 ']' 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1523687 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523687 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523687' 00:05:00.458 killing process with pid 1523687 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1523687 00:05:00.458 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1523687 00:05:00.717 00:05:00.717 real 0m1.904s 00:05:00.717 user 0m5.506s 00:05:00.717 sys 0m0.411s 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.717 
************************************ 00:05:00.717 END TEST locking_overlapped_coremask 00:05:00.717 ************************************ 00:05:00.717 11:06:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:00.717 11:06:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.717 11:06:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.717 11:06:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.717 ************************************ 00:05:00.717 START TEST locking_overlapped_coremask_via_rpc 00:05:00.717 ************************************ 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1523994 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1523994 /var/tmp/spdk.sock 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1523994 ']' 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:00.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.717 11:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.976 [2024-12-06 11:06:33.690994] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:05:00.976 [2024-12-06 11:06:33.691032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523994 ] 00:05:00.976 [2024-12-06 11:06:33.763781] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:00.976 [2024-12-06 11:06:33.763811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.976 [2024-12-06 11:06:33.800584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.976 [2024-12-06 11:06:33.800697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.976 [2024-12-06 11:06:33.800698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1524258 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 1524258 /var/tmp/spdk2.sock 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1524258 ']' 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.915 11:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.915 [2024-12-06 11:06:34.544185] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:05:01.915 [2024-12-06 11:06:34.544228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524258 ] 00:05:01.915 [2024-12-06 11:06:34.627872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:01.915 [2024-12-06 11:06:34.627902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.915 [2024-12-06 11:06:34.708740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.915 [2024-12-06 11:06:34.712109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.915 [2024-12-06 11:06:34.712110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.482 11:06:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.482 [2024-12-06 11:06:35.352125] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1523994 has claimed it. 00:05:02.482 request: 00:05:02.482 { 00:05:02.482 "method": "framework_enable_cpumask_locks", 00:05:02.482 "req_id": 1 00:05:02.482 } 00:05:02.482 Got JSON-RPC error response 00:05:02.482 response: 00:05:02.482 { 00:05:02.482 "code": -32603, 00:05:02.482 "message": "Failed to claim CPU core: 2" 00:05:02.482 } 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1523994 /var/tmp/spdk.sock 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1523994 ']' 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.482 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1524258 /var/tmp/spdk2.sock 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1524258 ']' 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.742 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.000 00:05:03.000 real 0m2.123s 00:05:03.000 user 0m0.890s 00:05:03.000 sys 0m0.166s 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.000 11:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.000 ************************************ 00:05:03.000 END TEST locking_overlapped_coremask_via_rpc 00:05:03.000 ************************************ 00:05:03.000 11:06:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:03.000 11:06:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1523994 ]] 00:05:03.000 11:06:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1523994 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1523994 ']' 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1523994 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523994 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523994' 00:05:03.001 killing process with pid 1523994 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1523994 00:05:03.001 11:06:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1523994 00:05:03.259 11:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1524258 ]] 00:05:03.259 11:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1524258 00:05:03.259 11:06:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1524258 ']' 00:05:03.259 11:06:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1524258 00:05:03.259 11:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.259 11:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.259 11:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1524258 00:05:03.518 11:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:03.518 11:06:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:03.518 11:06:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1524258' 00:05:03.518 killing process with pid 1524258 00:05:03.518 11:06:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1524258 00:05:03.518 11:06:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1524258 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1523994 ]] 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1523994 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1523994 ']' 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1523994 00:05:03.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1523994) - No such process 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1523994 is not found' 00:05:03.778 Process with pid 1523994 is not found 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1524258 ]] 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1524258 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1524258 ']' 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1524258 00:05:03.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1524258) - No such process 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1524258 is not found' 00:05:03.778 Process with pid 1524258 is not found 00:05:03.778 11:06:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.778 00:05:03.778 real 0m15.890s 00:05:03.778 user 0m28.260s 00:05:03.778 sys 0m5.037s 00:05:03.778 11:06:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.778 
11:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.778 ************************************ 00:05:03.778 END TEST cpu_locks 00:05:03.778 ************************************ 00:05:03.778 00:05:03.778 real 0m40.433s 00:05:03.778 user 1m18.280s 00:05:03.778 sys 0m8.456s 00:05:03.778 11:06:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.778 11:06:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.778 ************************************ 00:05:03.778 END TEST event 00:05:03.778 ************************************ 00:05:03.778 11:06:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.778 11:06:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.778 11:06:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.778 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:03.778 ************************************ 00:05:03.778 START TEST thread 00:05:03.778 ************************************ 00:05:03.778 11:06:36 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.778 * Looking for test storage... 
00:05:03.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:03.778 11:06:36 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.778 11:06:36 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.778 11:06:36 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.038 11:06:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.038 11:06:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.038 11:06:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.038 11:06:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.038 11:06:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.038 11:06:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.038 11:06:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.038 11:06:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.038 11:06:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.038 11:06:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.038 11:06:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.038 11:06:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:04.038 11:06:36 thread -- scripts/common.sh@345 -- # : 1 00:05:04.038 11:06:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.038 11:06:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.038 11:06:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:04.038 11:06:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:04.038 11:06:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.038 11:06:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:04.038 11:06:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.038 11:06:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:04.038 11:06:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:04.038 11:06:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.038 11:06:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:04.038 11:06:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.038 11:06:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.038 11:06:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.038 11:06:36 thread -- scripts/common.sh@368 -- # return 0 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.038 --rc genhtml_branch_coverage=1 00:05:04.038 --rc genhtml_function_coverage=1 00:05:04.038 --rc genhtml_legend=1 00:05:04.038 --rc geninfo_all_blocks=1 00:05:04.038 --rc geninfo_unexecuted_blocks=1 00:05:04.038 00:05:04.038 ' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.038 --rc genhtml_branch_coverage=1 00:05:04.038 --rc genhtml_function_coverage=1 00:05:04.038 --rc genhtml_legend=1 00:05:04.038 --rc geninfo_all_blocks=1 00:05:04.038 --rc geninfo_unexecuted_blocks=1 00:05:04.038 00:05:04.038 ' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.038 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.038 --rc genhtml_branch_coverage=1 00:05:04.038 --rc genhtml_function_coverage=1 00:05:04.038 --rc genhtml_legend=1 00:05:04.038 --rc geninfo_all_blocks=1 00:05:04.038 --rc geninfo_unexecuted_blocks=1 00:05:04.038 00:05:04.038 ' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.038 --rc genhtml_branch_coverage=1 00:05:04.038 --rc genhtml_function_coverage=1 00:05:04.038 --rc genhtml_legend=1 00:05:04.038 --rc geninfo_all_blocks=1 00:05:04.038 --rc geninfo_unexecuted_blocks=1 00:05:04.038 00:05:04.038 ' 00:05:04.038 11:06:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.038 11:06:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.038 ************************************ 00:05:04.038 START TEST thread_poller_perf 00:05:04.038 ************************************ 00:05:04.038 11:06:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.038 [2024-12-06 11:06:36.852652] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:05:04.038 [2024-12-06 11:06:36.852723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524716 ] 00:05:04.038 [2024-12-06 11:06:36.934319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.038 [2024-12-06 11:06:36.971201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.038 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:05.417 [2024-12-06T10:06:38.355Z] ====================================== 00:05:05.417 [2024-12-06T10:06:38.355Z] busy:2204538934 (cyc) 00:05:05.417 [2024-12-06T10:06:38.355Z] total_run_count: 461000 00:05:05.417 [2024-12-06T10:06:38.355Z] tsc_hz: 2200000000 (cyc) 00:05:05.417 [2024-12-06T10:06:38.355Z] ====================================== 00:05:05.417 [2024-12-06T10:06:38.355Z] poller_cost: 4782 (cyc), 2173 (nsec) 00:05:05.417 00:05:05.417 real 0m1.179s 00:05:05.417 user 0m1.104s 00:05:05.417 sys 0m0.071s 00:05:05.417 11:06:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.417 11:06:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.417 ************************************ 00:05:05.417 END TEST thread_poller_perf 00:05:05.417 ************************************ 00:05:05.417 11:06:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.417 11:06:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:05.417 11:06:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.417 11:06:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.417 ************************************ 00:05:05.417 START TEST thread_poller_perf 00:05:05.417 
************************************ 00:05:05.417 11:06:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.417 [2024-12-06 11:06:38.102343] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:05:05.417 [2024-12-06 11:06:38.102416] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524923 ] 00:05:05.417 [2024-12-06 11:06:38.178315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.417 [2024-12-06 11:06:38.215227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.417 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:06.355 [2024-12-06T10:06:39.293Z] ====================================== 00:05:06.355 [2024-12-06T10:06:39.293Z] busy:2201242642 (cyc) 00:05:06.355 [2024-12-06T10:06:39.293Z] total_run_count: 5675000 00:05:06.355 [2024-12-06T10:06:39.293Z] tsc_hz: 2200000000 (cyc) 00:05:06.355 [2024-12-06T10:06:39.293Z] ====================================== 00:05:06.355 [2024-12-06T10:06:39.293Z] poller_cost: 387 (cyc), 175 (nsec) 00:05:06.355 00:05:06.355 real 0m1.168s 00:05:06.355 user 0m1.098s 00:05:06.355 sys 0m0.065s 00:05:06.355 11:06:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.355 11:06:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.355 ************************************ 00:05:06.355 END TEST thread_poller_perf 00:05:06.355 ************************************ 00:05:06.355 11:06:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.355 00:05:06.355 real 0m2.664s 00:05:06.355 user 0m2.363s 00:05:06.355 sys 0m0.316s 00:05:06.355 11:06:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.355 11:06:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.355 ************************************ 00:05:06.355 END TEST thread 00:05:06.355 ************************************ 00:05:06.615 11:06:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:06.615 11:06:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:06.615 11:06:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.615 11:06:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.615 11:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.615 ************************************ 00:05:06.615 START TEST app_cmdline 00:05:06.615 ************************************ 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:06.615 * Looking for test storage... 00:05:06.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.615 11:06:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.615 --rc genhtml_branch_coverage=1 
00:05:06.615 --rc genhtml_function_coverage=1 00:05:06.615 --rc genhtml_legend=1 00:05:06.615 --rc geninfo_all_blocks=1 00:05:06.615 --rc geninfo_unexecuted_blocks=1 00:05:06.615 00:05:06.615 ' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.615 --rc genhtml_branch_coverage=1 00:05:06.615 --rc genhtml_function_coverage=1 00:05:06.615 --rc genhtml_legend=1 00:05:06.615 --rc geninfo_all_blocks=1 00:05:06.615 --rc geninfo_unexecuted_blocks=1 00:05:06.615 00:05:06.615 ' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.615 --rc genhtml_branch_coverage=1 00:05:06.615 --rc genhtml_function_coverage=1 00:05:06.615 --rc genhtml_legend=1 00:05:06.615 --rc geninfo_all_blocks=1 00:05:06.615 --rc geninfo_unexecuted_blocks=1 00:05:06.615 00:05:06.615 ' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.615 --rc genhtml_branch_coverage=1 00:05:06.615 --rc genhtml_function_coverage=1 00:05:06.615 --rc genhtml_legend=1 00:05:06.615 --rc geninfo_all_blocks=1 00:05:06.615 --rc geninfo_unexecuted_blocks=1 00:05:06.615 00:05:06.615 ' 00:05:06.615 11:06:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:06.615 11:06:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1525253 00:05:06.615 11:06:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:06.615 11:06:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1525253 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1525253 ']' 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.615 11:06:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.875 [2024-12-06 11:06:39.581916] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:05:06.875 [2024-12-06 11:06:39.581958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525253 ] 00:05:06.875 [2024-12-06 11:06:39.656727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.875 [2024-12-06 11:06:39.696782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:07.812 { 00:05:07.812 "version": "SPDK v25.01-pre git sha1 50b04b06b", 00:05:07.812 "fields": { 00:05:07.812 "major": 25, 00:05:07.812 "minor": 1, 00:05:07.812 "patch": 0, 00:05:07.812 "suffix": "-pre", 00:05:07.812 "commit": "50b04b06b" 00:05:07.812 } 00:05:07.812 } 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:07.812 11:06:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:07.812 11:06:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.072 request: 00:05:08.072 { 00:05:08.072 "method": "env_dpdk_get_mem_stats", 00:05:08.072 "req_id": 1 00:05:08.072 } 00:05:08.072 Got JSON-RPC error response 00:05:08.072 response: 00:05:08.072 { 00:05:08.072 "code": -32601, 00:05:08.072 "message": "Method not found" 00:05:08.072 } 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.072 11:06:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1525253 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1525253 ']' 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1525253 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525253 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525253' 00:05:08.072 killing process with pid 1525253 00:05:08.072 
11:06:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 1525253 00:05:08.072 11:06:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 1525253 00:05:08.332 00:05:08.332 real 0m1.797s 00:05:08.332 user 0m2.131s 00:05:08.332 sys 0m0.481s 00:05:08.332 11:06:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.332 11:06:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.332 ************************************ 00:05:08.332 END TEST app_cmdline 00:05:08.332 ************************************ 00:05:08.332 11:06:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.332 11:06:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.332 11:06:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.332 11:06:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.332 ************************************ 00:05:08.332 START TEST version 00:05:08.332 ************************************ 00:05:08.332 11:06:41 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.592 * Looking for test storage... 
00:05:08.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.592 11:06:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.592 11:06:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.592 11:06:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.592 11:06:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.592 11:06:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.592 11:06:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.592 11:06:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.592 11:06:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.592 11:06:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.592 11:06:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.592 11:06:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.592 11:06:41 version -- scripts/common.sh@344 -- # case "$op" in 00:05:08.592 11:06:41 version -- scripts/common.sh@345 -- # : 1 00:05:08.592 11:06:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.592 11:06:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.592 11:06:41 version -- scripts/common.sh@365 -- # decimal 1 00:05:08.592 11:06:41 version -- scripts/common.sh@353 -- # local d=1 00:05:08.592 11:06:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.592 11:06:41 version -- scripts/common.sh@355 -- # echo 1 00:05:08.592 11:06:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.592 11:06:41 version -- scripts/common.sh@366 -- # decimal 2 00:05:08.592 11:06:41 version -- scripts/common.sh@353 -- # local d=2 00:05:08.592 11:06:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.592 11:06:41 version -- scripts/common.sh@355 -- # echo 2 00:05:08.592 11:06:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.592 11:06:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.592 11:06:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.592 11:06:41 version -- scripts/common.sh@368 -- # return 0 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.592 11:06:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.592 --rc genhtml_branch_coverage=1 00:05:08.592 --rc genhtml_function_coverage=1 00:05:08.592 --rc genhtml_legend=1 00:05:08.592 --rc geninfo_all_blocks=1 00:05:08.592 --rc geninfo_unexecuted_blocks=1 00:05:08.593 00:05:08.593 ' 00:05:08.593 11:06:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.593 --rc genhtml_branch_coverage=1 00:05:08.593 --rc genhtml_function_coverage=1 00:05:08.593 --rc genhtml_legend=1 00:05:08.593 --rc geninfo_all_blocks=1 00:05:08.593 --rc geninfo_unexecuted_blocks=1 00:05:08.593 00:05:08.593 ' 00:05:08.593 11:06:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.593 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.593 --rc genhtml_branch_coverage=1 00:05:08.593 --rc genhtml_function_coverage=1 00:05:08.593 --rc genhtml_legend=1 00:05:08.593 --rc geninfo_all_blocks=1 00:05:08.593 --rc geninfo_unexecuted_blocks=1 00:05:08.593 00:05:08.593 ' 00:05:08.593 11:06:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.593 --rc genhtml_branch_coverage=1 00:05:08.593 --rc genhtml_function_coverage=1 00:05:08.593 --rc genhtml_legend=1 00:05:08.593 --rc geninfo_all_blocks=1 00:05:08.593 --rc geninfo_unexecuted_blocks=1 00:05:08.593 00:05:08.593 ' 00:05:08.593 11:06:41 version -- app/version.sh@17 -- # get_header_version major 00:05:08.593 11:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # cut -f2 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.593 11:06:41 version -- app/version.sh@17 -- # major=25 00:05:08.593 11:06:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:08.593 11:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # cut -f2 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.593 11:06:41 version -- app/version.sh@18 -- # minor=1 00:05:08.593 11:06:41 version -- app/version.sh@19 -- # get_header_version patch 00:05:08.593 11:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # cut -f2 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.593 
11:06:41 version -- app/version.sh@19 -- # patch=0 00:05:08.593 11:06:41 version -- app/version.sh@20 -- # get_header_version suffix 00:05:08.593 11:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # cut -f2 00:05:08.593 11:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.593 11:06:41 version -- app/version.sh@20 -- # suffix=-pre 00:05:08.593 11:06:41 version -- app/version.sh@22 -- # version=25.1 00:05:08.593 11:06:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:08.593 11:06:41 version -- app/version.sh@28 -- # version=25.1rc0 00:05:08.593 11:06:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:08.593 11:06:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:08.593 11:06:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:08.593 11:06:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:08.593 00:05:08.593 real 0m0.244s 00:05:08.593 user 0m0.154s 00:05:08.593 sys 0m0.133s 00:05:08.593 11:06:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.593 11:06:41 version -- common/autotest_common.sh@10 -- # set +x 00:05:08.593 ************************************ 00:05:08.593 END TEST version 00:05:08.593 ************************************ 00:05:08.593 11:06:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:08.593 11:06:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:08.593 11:06:41 -- spdk/autotest.sh@194 -- # uname -s 00:05:08.593 11:06:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:08.593 11:06:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.593 11:06:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.593 11:06:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:08.593 11:06:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:08.593 11:06:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:08.593 11:06:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.593 11:06:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.852 11:06:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:08.852 11:06:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:08.852 11:06:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:08.852 11:06:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:08.852 11:06:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:08.852 11:06:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:08.852 11:06:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:08.852 11:06:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:08.852 11:06:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.852 11:06:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.852 ************************************ 00:05:08.853 START TEST nvmf_tcp 00:05:08.853 ************************************ 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:08.853 * Looking for test storage... 
00:05:08.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.853 11:06:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.853 --rc genhtml_branch_coverage=1 00:05:08.853 --rc genhtml_function_coverage=1 00:05:08.853 --rc genhtml_legend=1 00:05:08.853 --rc geninfo_all_blocks=1 00:05:08.853 --rc geninfo_unexecuted_blocks=1 00:05:08.853 00:05:08.853 ' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.853 --rc genhtml_branch_coverage=1 00:05:08.853 --rc genhtml_function_coverage=1 00:05:08.853 --rc genhtml_legend=1 00:05:08.853 --rc geninfo_all_blocks=1 00:05:08.853 --rc geninfo_unexecuted_blocks=1 00:05:08.853 00:05:08.853 ' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.853 --rc genhtml_branch_coverage=1 00:05:08.853 --rc genhtml_function_coverage=1 00:05:08.853 --rc genhtml_legend=1 00:05:08.853 --rc geninfo_all_blocks=1 00:05:08.853 --rc geninfo_unexecuted_blocks=1 00:05:08.853 00:05:08.853 ' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.853 --rc genhtml_branch_coverage=1 00:05:08.853 --rc genhtml_function_coverage=1 00:05:08.853 --rc genhtml_legend=1 00:05:08.853 --rc geninfo_all_blocks=1 00:05:08.853 --rc geninfo_unexecuted_blocks=1 00:05:08.853 00:05:08.853 ' 00:05:08.853 11:06:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:08.853 11:06:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:08.853 11:06:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.853 11:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.112 ************************************ 00:05:09.112 START TEST nvmf_target_core 00:05:09.112 ************************************ 00:05:09.112 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.112 * Looking for test storage... 
00:05:09.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.112 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.112 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.112 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.113 --rc genhtml_branch_coverage=1 00:05:09.113 --rc genhtml_function_coverage=1 00:05:09.113 --rc genhtml_legend=1 00:05:09.113 --rc geninfo_all_blocks=1 00:05:09.113 --rc geninfo_unexecuted_blocks=1 00:05:09.113 00:05:09.113 ' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.113 --rc genhtml_branch_coverage=1 
00:05:09.113 --rc genhtml_function_coverage=1 00:05:09.113 --rc genhtml_legend=1 00:05:09.113 --rc geninfo_all_blocks=1 00:05:09.113 --rc geninfo_unexecuted_blocks=1 00:05:09.113 00:05:09.113 ' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.113 --rc genhtml_branch_coverage=1 00:05:09.113 --rc genhtml_function_coverage=1 00:05:09.113 --rc genhtml_legend=1 00:05:09.113 --rc geninfo_all_blocks=1 00:05:09.113 --rc geninfo_unexecuted_blocks=1 00:05:09.113 00:05:09.113 ' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.113 --rc genhtml_branch_coverage=1 00:05:09.113 --rc genhtml_function_coverage=1 00:05:09.113 --rc genhtml_legend=1 00:05:09.113 --rc geninfo_all_blocks=1 00:05:09.113 --rc geninfo_unexecuted_blocks=1 00:05:09.113 00:05:09.113 ' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.113 11:06:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:09.113 ************************************ 00:05:09.113 START TEST nvmf_abort 00:05:09.113 ************************************ 00:05:09.113 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.373 * Looking for test storage... 
00:05:09.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.373 
11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.373 --rc genhtml_branch_coverage=1 00:05:09.373 --rc genhtml_function_coverage=1 00:05:09.373 --rc genhtml_legend=1 00:05:09.373 --rc geninfo_all_blocks=1 00:05:09.373 --rc 
geninfo_unexecuted_blocks=1 00:05:09.373 00:05:09.373 ' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.374 --rc genhtml_branch_coverage=1 00:05:09.374 --rc genhtml_function_coverage=1 00:05:09.374 --rc genhtml_legend=1 00:05:09.374 --rc geninfo_all_blocks=1 00:05:09.374 --rc geninfo_unexecuted_blocks=1 00:05:09.374 00:05:09.374 ' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.374 --rc genhtml_branch_coverage=1 00:05:09.374 --rc genhtml_function_coverage=1 00:05:09.374 --rc genhtml_legend=1 00:05:09.374 --rc geninfo_all_blocks=1 00:05:09.374 --rc geninfo_unexecuted_blocks=1 00:05:09.374 00:05:09.374 ' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.374 --rc genhtml_branch_coverage=1 00:05:09.374 --rc genhtml_function_coverage=1 00:05:09.374 --rc genhtml_legend=1 00:05:09.374 --rc geninfo_all_blocks=1 00:05:09.374 --rc geninfo_unexecuted_blocks=1 00:05:09.374 00:05:09.374 ' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
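The trace above walks `scripts/common.sh`'s field-wise version comparison (`cmp_versions`/`lt`), splitting each version on `.`/`-` into arrays and comparing element by element to decide that lcov 1.15 < 2. A minimal standalone sketch of that logic; the function name here is illustrative, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the field-wise version compare traced above: split on '.'/'-',
# pad missing fields with 0, compare numerically left to right.
version_lt() {
    local -a ver1 ver2
    local v ver1_l ver2_l
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # non-numeric or missing fields compare as 0
        [[ ${ver1[v]:-0} =~ ^[0-9]+$ ]] || ver1[v]=0
        [[ ${ver2[v]:-0} =~ ^[0-9]+$ ]] || ver2[v]=0
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1   # equal => not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```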
00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.374 11:06:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:09.374 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:15.940 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:15.941 11:06:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:15.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:15.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:15.941 11:06:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:15.941 Found net devices under 0000:af:00.0: cvl_0_0 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:15.941 Found net devices under 0000:af:00.1: cvl_0_1 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:15.941 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:15.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:15.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:05:15.941 00:05:15.941 --- 10.0.0.2 ping statistics --- 00:05:15.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.941 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:15.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:15.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:05:15.941 00:05:15.941 --- 10.0.0.1 ping statistics --- 00:05:15.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.941 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.941 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1529145 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1529145 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1529145 ']' 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.942 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.942 [2024-12-06 11:06:48.344152] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
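The `waitforlisten 1529145` step above blocks until the freshly started nvmf_tgt is listening on its RPC Unix socket (`/var/tmp/spdk.sock` in this trace). A minimal sketch of that polling loop, with a placeholder retry policy; SPDK's real helper additionally checks that the pid is still alive:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern traced above: poll until the
# target's RPC Unix socket exists. Path and retry count are placeholders.
wait_for_rpc() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket present: target is listening
        sleep 0.1
    done
    return 1
}
# Example: nvmf_tgt -i 0 -m 0xE & wait_for_rpc /var/tmp/spdk.sock
```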
00:05:15.942 [2024-12-06 11:06:48.344196] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:15.942 [2024-12-06 11:06:48.419903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.942 [2024-12-06 11:06:48.460456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:15.942 [2024-12-06 11:06:48.460489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:15.942 [2024-12-06 11:06:48.460498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.942 [2024-12-06 11:06:48.460503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.942 [2024-12-06 11:06:48.460508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
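The network plumbing traced earlier in this section (`ip netns add`, moving the target NIC into the namespace, per-side addressing, then cross-namespace pings) can be sketched as one helper. The namespace and interface names below are placeholders, not this run's `cvl_0_*` devices, and the commands need root, so the function is defined but not invoked:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup traced above. Requires root;
# ns/interface arguments are placeholders.
setup_target_ns() {
    local ns=$1 target_if=$2 initiator_if=$3
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # move target NIC into the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1        # target -> initiator
}
# Example (root only): setup_target_ns cvl_0_0_ns_spdk eth0 eth1
```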
00:05:15.942 [2024-12-06 11:06:48.461861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.942 [2024-12-06 11:06:48.461979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.942 [2024-12-06 11:06:48.461980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 [2024-12-06 11:06:49.195089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 Malloc0 00:05:16.509 11:06:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 Delay0 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 [2024-12-06 11:06:49.264829] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.509 [2024-12-06 11:06:49.401374] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.043 Initializing NVMe Controllers 00:05:19.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.043 controller IO queue size 128 less than required 00:05:19.043 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.043 Initialization complete. Launching workers. 
00:05:19.043 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39500 00:05:19.043 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39561, failed to submit 62 00:05:19.043 success 39504, unsuccessful 57, failed 0 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.043 rmmod nvme_tcp 00:05:19.043 rmmod nvme_fabrics 00:05:19.043 rmmod nvme_keyring 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.043 11:06:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1529145 ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1529145 ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529145' 00:05:19.043 killing process with pid 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1529145 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.043 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.581 00:05:21.581 real 0m11.866s 00:05:21.581 user 0m13.662s 00:05:21.581 sys 0m5.519s 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.581 ************************************ 00:05:21.581 END TEST nvmf_abort 00:05:21.581 ************************************ 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.581 11:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:21.581 ************************************ 00:05:21.581 START TEST nvmf_ns_hotplug_stress 00:05:21.581 ************************************ 00:05:21.581 11:06:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.581 * Looking for test storage... 00:05:21.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.581 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.582 
11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.582 11:06:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.582 --rc genhtml_branch_coverage=1 00:05:21.582 --rc genhtml_function_coverage=1 00:05:21.582 --rc genhtml_legend=1 00:05:21.582 --rc geninfo_all_blocks=1 00:05:21.582 --rc geninfo_unexecuted_blocks=1 00:05:21.582 00:05:21.582 ' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.582 --rc genhtml_branch_coverage=1 00:05:21.582 --rc genhtml_function_coverage=1 00:05:21.582 --rc genhtml_legend=1 00:05:21.582 --rc geninfo_all_blocks=1 00:05:21.582 --rc geninfo_unexecuted_blocks=1 00:05:21.582 00:05:21.582 ' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.582 --rc genhtml_branch_coverage=1 00:05:21.582 --rc genhtml_function_coverage=1 00:05:21.582 --rc genhtml_legend=1 00:05:21.582 --rc geninfo_all_blocks=1 00:05:21.582 --rc geninfo_unexecuted_blocks=1 00:05:21.582 00:05:21.582 ' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.582 --rc genhtml_branch_coverage=1 00:05:21.582 --rc genhtml_function_coverage=1 00:05:21.582 --rc genhtml_legend=1 00:05:21.582 --rc geninfo_all_blocks=1 00:05:21.582 --rc geninfo_unexecuted_blocks=1 00:05:21.582 
00:05:21.582 ' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:21.582 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.583 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.151 11:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:28.151 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:28.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.151 11:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:28.151 Found net devices under 0000:af:00.0: cvl_0_0 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.151 11:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:28.151 Found net devices under 0000:af:00.1: cvl_0_1 00:05:28.151 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.152 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.152 11:07:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:28.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:28.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:05:28.152 00:05:28.152 --- 10.0.0.2 ping statistics --- 00:05:28.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.152 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:05:28.152 00:05:28.152 --- 10.0.0.1 ping statistics --- 00:05:28.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.152 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
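The trace above shows `nvmf_tcp_init` from nvmf/common.sh wiring two ports of the same NIC (cvl_0_0 and cvl_0_1, cabled back-to-back) into a self-contained test topology: the target-side port is moved into a network namespace, each side gets a 10.0.0.x/24 address, an iptables ACCEPT rule opens TCP port 4420, and connectivity is verified with ping in both directions. A minimal sketch of that setup, with interface names and addresses taken from the log (requires root and the physical NICs, so it is illustrative only, not runnable elsewhere):

```shell
# Sketch of the netns-based NVMe/TCP loopback topology from the log.
# cvl_0_0 = target-side port, cvl_0_1 = initiator-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # isolate target port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```

Running the target inside the namespace (via `ip netns exec`, as `NVMF_TARGET_NS_CMD` does) is what lets a single host exercise real NIC hardware on both the initiator and target side of the TCP connection.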
tcp -o' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1533443 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1533443 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1533443 ']' 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.152 [2024-12-06 11:07:00.290652] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:05:28.152 [2024-12-06 11:07:00.290698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.152 [2024-12-06 11:07:00.366589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.152 [2024-12-06 11:07:00.405629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:28.152 [2024-12-06 11:07:00.405663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:28.152 [2024-12-06 11:07:00.405670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.152 [2024-12-06 11:07:00.405675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.152 [2024-12-06 11:07:00.405680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:28.152 [2024-12-06 11:07:00.407066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.152 [2024-12-06 11:07:00.407169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.152 [2024-12-06 11:07:00.407170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:28.152 [2024-12-06 11:07:00.711516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:28.152 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.152 [2024-12-06 11:07:01.072830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.413 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:28.413 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:28.672 Malloc0 00:05:28.672 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:28.931 Delay0 00:05:28.931 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.931 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:29.189 NULL1 00:05:29.190 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:29.448 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:29.448 11:07:02 
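The RPC calls interleaved through the trace above provision the target before the stress run starts: a TCP transport, a subsystem, a discovery and data listener on 10.0.0.2:4420, and a bdev chain of Malloc0 wrapped by Delay0 plus a resizable NULL1 bdev, both attached as namespaces. Collected into one sequence (paths, NQN, and parameters copied from the log; assumes the target from the log is already running):

```shell
# Provisioning sequence reconstructed from the ns_hotplug_stress.sh@27-36
# markers above. rpc.py talks to the nvmf_tgt over /var/tmp/spdk.sock.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MiB, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The Delay0 wrapper adds 1 ms latency to every Malloc0 I/O, which keeps requests in flight long enough for namespace removal to race against them, and NULL1 discards I/O entirely so it can be resized cheaply each iteration.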
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1533984 00:05:29.448 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:29.448 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.706 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.706 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:29.706 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:29.965 true 00:05:29.965 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:29.965 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.223 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.482 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:30.482 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:30.482 true 00:05:30.482 11:07:03 
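The pattern that repeats for the rest of this log (sh@44 through sh@50, null_size stepping 1001, 1002, ...) is the body of the stress loop in target/ns_hotplug_stress.sh: while the `spdk_nvme_perf` workload launched above is still alive, a namespace is detached and re-attached and the null bdev is grown by one unit, all under live I/O. A sketch of that loop, reconstructed from the markers in the log rather than quoted from the script itself:

```shell
# Hotplug stress loop as it appears in the sh@44..sh@50 trace markers.
# PERF_PID is the spdk_nvme_perf process started with -t 30 above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID"; do                        # perf still running?
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"         # hot-resize under I/O
done
```

Because `spdk_nvme_perf` runs with `-Q 1000`, up to 1000 I/O errors per iteration are tolerated; the test passes as long as neither the target nor the initiator crashes while namespaces appear, disappear, and change size beneath the active workload.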
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:30.482 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.740 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.998 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:30.998 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:31.262 true 00:05:31.262 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:31.262 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.262 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.521 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:31.521 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:31.779 true 00:05:31.779 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:31.779 11:07:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.037 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.296 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:32.296 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:32.296 true 00:05:32.296 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:32.296 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.553 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.810 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:32.810 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:33.067 true 00:05:33.067 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:33.067 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.067 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.325 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:33.325 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:33.584 true 00:05:33.584 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:33.584 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.903 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.903 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:33.903 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:34.206 true 00:05:34.206 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:34.206 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.511 
11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.511 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:34.511 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:34.770 true 00:05:34.770 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:34.770 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.029 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.029 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:35.029 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:35.289 true 00:05:35.289 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:35.289 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.548 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.807 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:35.807 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:35.807 true 00:05:36.065 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:36.065 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.065 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.325 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:36.325 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:36.583 true 00:05:36.583 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:36.583 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.843 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.843 
11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:36.843 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:37.102 true 00:05:37.102 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:37.102 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.361 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.620 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:37.620 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:37.620 true 00:05:37.878 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:37.878 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.878 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.137 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:38.137 11:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:38.395 true 00:05:38.395 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:38.395 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.654 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.654 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:38.654 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:38.914 true 00:05:38.914 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:38.914 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.173 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.432 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:39.432 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:39.432 true 00:05:39.691 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:39.691 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.691 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.951 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:39.951 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:40.210 true 00:05:40.210 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:40.210 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.470 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.470 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:40.470 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:40.729 true 00:05:40.729 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:40.729 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.988 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.246 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:41.246 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:41.246 true 00:05:41.504 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:41.504 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.504 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.763 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:41.763 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:42.021 true 00:05:42.021 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:42.021 11:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.280 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.280 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:42.280 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:42.538 true 00:05:42.538 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:42.538 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.796 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.054 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:43.054 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:43.312 true 00:05:43.312 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:43.312 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.312 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.570 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:43.570 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:43.828 true 00:05:43.828 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:43.829 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.087 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.347 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:44.347 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:44.347 true 00:05:44.347 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:44.347 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.606 
11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.865 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:44.865 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:45.124 true 00:05:45.124 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:45.124 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.124 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.383 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:45.383 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:45.641 true 00:05:45.641 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:45.641 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.900 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.900 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:45.900 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:46.159 true 00:05:46.159 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:46.159 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.418 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.676 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:46.676 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:46.676 true 00:05:46.936 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:46.936 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.936 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.195 
11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:47.195 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:47.454 true 00:05:47.454 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:47.454 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.712 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.712 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:47.712 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:47.971 true 00:05:47.971 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:47.971 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.230 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.489 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:48.489 11:07:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:48.489 true 00:05:48.489 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:48.489 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.746 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.004 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:49.004 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:49.262 true 00:05:49.262 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:49.262 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.521 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.521 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:49.521 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:49.779 true 00:05:49.779 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:49.779 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.083 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.083 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:50.342 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:50.342 true 00:05:50.342 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:50.342 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.600 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.858 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:50.858 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:51.116 true 00:05:51.116 11:07:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:51.116 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.116 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.374 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:51.374 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:51.632 true 00:05:51.632 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:51.632 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.890 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.146 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:52.146 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:52.146 true 00:05:52.146 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:52.146 11:07:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.403 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.661 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:52.661 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:52.919 true 00:05:52.919 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:52.919 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.177 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.177 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:53.177 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:53.436 true 00:05:53.436 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:53.436 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.694 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.953 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:53.953 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:53.953 true 00:05:53.953 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:53.953 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.210 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.467 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:54.467 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:54.725 true 00:05:54.725 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:54.725 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.984 
11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.984 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:54.984 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:55.242 true 00:05:55.242 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:55.242 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.501 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.760 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:55.760 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:55.760 true 00:05:56.019 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:56.019 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.019 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.278 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:56.278 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:56.536 true 00:05:56.536 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:56.536 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.796 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.796 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:56.796 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:57.055 true 00:05:57.055 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:57.055 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.332 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.691 
11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:57.691 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:57.691 true 00:05:57.691 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:57.691 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.950 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.208 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:58.209 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:58.209 true 00:05:58.209 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:58.209 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.467 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.726 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:58.726 11:07:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:58.983 true 00:05:58.983 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:58.983 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.983 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.241 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:59.241 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:59.499 true 00:05:59.499 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:05:59.499 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.759 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.759 Initializing NVMe Controllers 00:05:59.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:59.759 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:05:59.759 Controller IO queue size 128, less 
than required. 00:05:59.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:59.759 WARNING: Some requested NVMe devices were skipped 00:05:59.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:59.759 Initialization complete. Launching workers. 00:05:59.759 ======================================================== 00:05:59.759 Latency(us) 00:05:59.759 Device Information : IOPS MiB/s Average min max 00:05:59.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29629.18 14.47 4320.04 1961.41 9135.50 00:05:59.759 ======================================================== 00:05:59.759 Total : 29629.18 14.47 4320.04 1961.41 9135.50 00:05:59.759 00:05:59.759 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:59.759 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:00.018 true 00:06:00.018 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1533984 00:06:00.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1533984) - No such process 00:06:00.019 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1533984 00:06:00.019 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.277 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.536 11:07:33 
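The loop traced above (ns_hotplug_stress.sh lines @44-@50) repeats one iteration per `null_size` value: check the stress process is alive, hot-remove namespace 1, re-add the Delay0 bdev as a namespace, then grow the NULL1 bdev by one block. A minimal dry-run sketch of that iteration, reconstructed from the log — the `echo`-based RPC stub and the small loop bound are illustrative, since the real script sends these calls through `scripts/rpc.py` to a live SPDK target:

```shell
# Dry-run sketch of the add/remove/resize loop seen in the log above.
# The real script invokes $SPDK_DIR/scripts/rpc.py against a running
# target; here RPC calls are echoed so the sketch runs standalone.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1022
while [ "$null_size" -le 1025 ]; do
    $RPC nvmf_subsystem_remove_ns "$NQN" 1     # hot-unplug namespace 1
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0   # re-attach the Delay0 bdev
    $RPC bdev_null_resize NULL1 "$null_size"   # grow NULL1 to the next size
    null_size=$((null_size + 1))
done
```

In the actual run the loop continues until the backgrounded I/O process exits (the `kill -0` check at line @44 fails with "No such process"), at which point the script moves on to the multi-threaded phase.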
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:00.536 null0 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.536 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:00.795 null1 00:06:00.795 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.795 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.795 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:01.055 null2 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:01.055 null3 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.055 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:01.314 null4 00:06:01.314 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.314 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.314 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:01.573 null5 00:06:01.573 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.573 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.573 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:01.573 null6 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 
4096 00:06:01.833 null7 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1539946 1539947 1539949 1539950 1539953 1539955 1539956 1539958 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:01.833 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.834 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.834 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.093 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.362 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.363 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.364 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.364 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.364 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.364 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.364 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.364 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.624 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.624 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.882 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.141 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.142 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.142 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.142 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.142 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.142 11:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.142 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.142 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.406 11:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.406 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.663 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.664 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.664 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.921 
11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.921 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.922 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.180 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.180 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.180 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.180 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.180 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.439 11:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.439 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.697 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.956 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.956 11:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.214 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.214 11:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.214 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.215 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.474 11:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.474 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.733 11:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:05.733 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:05.734 11:07:38
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.734 rmmod nvme_tcp 00:06:05.734 rmmod nvme_fabrics 00:06:05.734 rmmod nvme_keyring 00:06:05.734 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1533443 ']' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1533443 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1533443 ']' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1533443 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533443 00:06:05.997 11:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533443' 00:06:05.997 killing process with pid 1533443 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1533443 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1533443 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:05.997 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.524 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.524 00:06:08.524 real 0m46.980s 00:06:08.524 user 3m17.301s 00:06:08.524 sys 0m16.940s 00:06:08.524 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.524 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.524 ************************************ 00:06:08.524 END TEST nvmf_ns_hotplug_stress 00:06:08.525 ************************************ 00:06:08.525 11:07:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.525 11:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.525 11:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.525 11:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.525 ************************************ 00:06:08.525 START TEST nvmf_delete_subsystem 00:06:08.525 ************************************ 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.525 * Looking for test storage... 
00:06:08.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:08.525 11:07:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.525 --rc genhtml_branch_coverage=1 00:06:08.525 --rc genhtml_function_coverage=1 00:06:08.525 --rc genhtml_legend=1 00:06:08.525 --rc geninfo_all_blocks=1 00:06:08.525 --rc geninfo_unexecuted_blocks=1 00:06:08.525 00:06:08.525 ' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.525 --rc genhtml_branch_coverage=1 00:06:08.525 --rc genhtml_function_coverage=1 00:06:08.525 --rc genhtml_legend=1 00:06:08.525 --rc geninfo_all_blocks=1 00:06:08.525 --rc geninfo_unexecuted_blocks=1 00:06:08.525 00:06:08.525 ' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.525 --rc genhtml_branch_coverage=1 00:06:08.525 --rc genhtml_function_coverage=1 00:06:08.525 --rc genhtml_legend=1 00:06:08.525 --rc geninfo_all_blocks=1 00:06:08.525 --rc geninfo_unexecuted_blocks=1 00:06:08.525 00:06:08.525 ' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.525 --rc genhtml_branch_coverage=1 00:06:08.525 --rc genhtml_function_coverage=1 00:06:08.525 --rc genhtml_legend=1 00:06:08.525 --rc geninfo_all_blocks=1 00:06:08.525 --rc geninfo_unexecuted_blocks=1 00:06:08.525 00:06:08.525 ' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.525 11:07:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.525 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.526 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.091 11:07:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:15.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:15.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:15.091 Found net devices under 0000:af:00.0: cvl_0_0 00:06:15.091 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:06:15.092 Found net devices under 0000:af:00.1: cvl_0_1 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.092 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:15.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:06:15.092 00:06:15.092 --- 10.0.0.2 ping statistics --- 00:06:15.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.092 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:06:15.092 00:06:15.092 --- 10.0.0.1 ping statistics --- 00:06:15.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.092 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:15.092 11:07:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1544630 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1544630 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1544630 ']' 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.092 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.092 [2024-12-06 11:07:47.359330] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:06:15.092 [2024-12-06 11:07:47.359378] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:15.092 [2024-12-06 11:07:47.437249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:15.092 [2024-12-06 11:07:47.475648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:15.092 [2024-12-06 11:07:47.475681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:15.092 [2024-12-06 11:07:47.475687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:15.092 [2024-12-06 11:07:47.475693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:15.092 [2024-12-06 11:07:47.475698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:15.092 [2024-12-06 11:07:47.476819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.092 [2024-12-06 11:07:47.476821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 [2024-12-06 11:07:48.206429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 [2024-12-06 11:07:48.226611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 NULL1 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 Delay0 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1544797 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:15.352 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.611 [2024-12-06 11:07:48.337443] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
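For orientation, the target setup that delete_subsystem.sh has driven up to this point (trace steps @15 through @24) boils down to the RPC sequence below. This is a stand-alone sketch, not the test script itself: `rpc_cmd` is stubbed here to echo its arguments, whereas in the real run it is assumed to forward to SPDK's scripts/rpc.py over /var/tmp/spdk.sock. The commands and arguments are copied verbatim from the trace above.

```shell
#!/bin/sh
# Sketch of the subsystem setup performed by delete_subsystem.sh (steps @15-@24).
# rpc_cmd is a stand-in stub so the sketch runs on its own; in the real test it
# dispatches each call as a JSON-RPC to the nvmf_tgt process (assumption).
rpc_cmd() { echo "rpc_cmd $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 # subsystem, max 10 ns
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                                                 # 1000 MiB, 512 B blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 # 1 s artificial latency
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                         # expose Delay0 as a namespace
```

The delay bdev is what makes the later nvmf_delete_subsystem race interesting: with ~1 s per I/O, the perf workload still has commands in flight when the subsystem is torn down, which produces the aborted completions seen below.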
00:06:17.514 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:17.514 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.514 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error 
(sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 [2024-12-06 11:07:50.414884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a990e0 is same with the state(6) to be set 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 
Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error 
(sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 
Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 [2024-12-06 11:07:50.415346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98f00 is same with the state(6) to be set 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Write completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 starting I/O failed: -6 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.514 Read completed with error (sct=0, sc=8) 00:06:17.515 Write completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 starting I/O failed: -6 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Write completed with error (sct=0, sc=8) 00:06:17.515 starting I/O 
failed: -6 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 starting I/O failed: -6 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Write completed with error (sct=0, sc=8) 00:06:17.515 starting I/O failed: -6 00:06:17.515 Write completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Write completed with error (sct=0, sc=8) 00:06:17.515 starting I/O failed: -6 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 starting I/O failed: -6 00:06:17.515 Read completed with error (sct=0, sc=8) 00:06:17.515 [2024-12-06 11:07:50.416040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f400d4d0 is same with the state(6) to be set 00:06:18.892 [2024-12-06 11:07:51.390565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a5f0 is same with the state(6) to be set 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Write completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Write completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Write completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 
00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.892 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 [2024-12-06 11:07:51.418084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a992c0 is same with the state(6) to be set 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write 
completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 [2024-12-06 11:07:51.418597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f4000c40 is same with the state(6) to be set 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 [2024-12-06 11:07:51.418726] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f400d800 is same with the state(6) to be set 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Write completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 Read completed with error (sct=0, sc=8) 00:06:18.893 [2024-12-06 11:07:51.419359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f400d020 is same with the state(6) to be set 00:06:18.893 Initializing NVMe Controllers 00:06:18.893 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:18.893 Controller IO queue size 128, less than required.
00:06:18.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:18.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:18.893 Initialization complete. Launching workers.
00:06:18.893 ========================================================
00:06:18.893                                                                                                      Latency(us)
00:06:18.893 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:18.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     155.65       0.08  876255.45     292.10 1008707.95
00:06:18.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.59       0.08 1048878.23     391.97 2001266.02
00:06:18.893 ========================================================
00:06:18.893 Total                                                                    :     321.24       0.16  965239.02     292.10 2001266.02
00:06:18.893
00:06:18.893 [2024-12-06 11:07:51.419919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9a5f0 (9): Bad file descriptor
00:06:18.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:18.893 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:18.893 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:18.893 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1544797
00:06:18.893 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:19.152 11:07:51
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1544797 00:06:19.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1544797) - No such process 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1544797 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1544797 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1544797 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:19.152 11:07:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.152 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.152 [2024-12-06 11:07:51.947604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1545450 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:19.153 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.153 [2024-12-06 11:07:52.039562] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:19.720 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.720 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:19.720 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.286 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.286 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:20.286 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.544 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.544 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:20.544 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.112 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.112 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
1545450 00:06:21.112 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.683 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.683 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:21.683 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.251 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.251 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:22.251 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.510 Initializing NVMe Controllers 00:06:22.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:22.510 Controller IO queue size 128, less than required. 00:06:22.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:22.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:22.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:22.510 Initialization complete. Launching workers. 
00:06:22.510 ======================================================== 00:06:22.510 Latency(us) 00:06:22.510 Device Information : IOPS MiB/s Average min max 00:06:22.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001859.38 1000116.89 1005814.95 00:06:22.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003815.15 1000102.56 1041885.74 00:06:22.510 ======================================================== 00:06:22.510 Total : 256.00 0.12 1002837.26 1000102.56 1041885.74 00:06:22.510 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1545450 00:06:22.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1545450) - No such process 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1545450 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:22.786 rmmod nvme_tcp 00:06:22.786 rmmod nvme_fabrics 00:06:22.786 rmmod nvme_keyring 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1544630 ']' 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1544630 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1544630 ']' 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1544630 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:22.786 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544630 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544630' 00:06:22.787 killing process with pid 1544630 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1544630 00:06:22.787 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1544630 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.051 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.956 00:06:24.956 real 0m16.812s 00:06:24.956 user 0m30.555s 00:06:24.956 sys 0m5.495s 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.956 ************************************ 00:06:24.956 END TEST 
nvmf_delete_subsystem 00:06:24.956 ************************************ 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.956 11:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.216 ************************************ 00:06:25.216 START TEST nvmf_host_management 00:06:25.216 ************************************ 00:06:25.216 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:25.216 * Looking for test storage... 00:06:25.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.216 11:07:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.216 --rc genhtml_branch_coverage=1 00:06:25.216 --rc genhtml_function_coverage=1 00:06:25.216 --rc genhtml_legend=1 00:06:25.216 --rc 
geninfo_all_blocks=1 00:06:25.216 --rc geninfo_unexecuted_blocks=1 00:06:25.216 00:06:25.216 ' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.216 --rc genhtml_branch_coverage=1 00:06:25.216 --rc genhtml_function_coverage=1 00:06:25.216 --rc genhtml_legend=1 00:06:25.216 --rc geninfo_all_blocks=1 00:06:25.216 --rc geninfo_unexecuted_blocks=1 00:06:25.216 00:06:25.216 ' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.216 --rc genhtml_branch_coverage=1 00:06:25.216 --rc genhtml_function_coverage=1 00:06:25.216 --rc genhtml_legend=1 00:06:25.216 --rc geninfo_all_blocks=1 00:06:25.216 --rc geninfo_unexecuted_blocks=1 00:06:25.216 00:06:25.216 ' 00:06:25.216 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.217 --rc genhtml_branch_coverage=1 00:06:25.217 --rc genhtml_function_coverage=1 00:06:25.217 --rc genhtml_legend=1 00:06:25.217 --rc geninfo_all_blocks=1 00:06:25.217 --rc geninfo_unexecuted_blocks=1 00:06:25.217 00:06:25.217 ' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.217 
11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.217 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.786 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.786 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.786 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.786 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:31.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:31.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.787 11:08:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:31.787 Found net devices under 0000:af:00.0: cvl_0_0 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:31.787 Found net devices under 0000:af:00.1: cvl_0_1 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.787 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:06:31.787 00:06:31.787 --- 10.0.0.2 ping statistics --- 00:06:31.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.787 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:06:31.787 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:31.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:06:31.788 00:06:31.788 --- 10.0.0.1 ping statistics --- 00:06:31.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.788 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.788 11:08:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1549768 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1549768 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1549768 ']' 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.788 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.788 [2024-12-06 11:08:04.262363] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
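The nvmf_tcp_init sequence traced above (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, ping both directions) can be sketched as a standalone script. This is a hedged reconstruction from the trace, not the actual nvmf/common.sh; the DRY_RUN wrapper is an addition so the sequence can be previewed without root.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the trace (nvmf/common.sh@250-291).
# Interface names, namespace, and addresses are the ones the log prints; the
# DRY_RUN wrapper is an addition so the commands can be previewed without root.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}          # set DRY_RUN=0 (as root) to actually apply
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2             # NVMF_FIRST_TARGET_IP in the trace
INITIATOR_IP=10.0.0.1          # NVMF_FIRST_INITIATOR_IP in the trace

run() { if [[ $DRY_RUN -eq 1 ]]; then echo "$@"; else "$@"; fi; }

nvmf_tcp_init() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$TARGET_NS"
    run ip link set "$TARGET_IF" netns "$TARGET_NS"        # target NIC moves into the namespace
    run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF" # initiator NIC stays in the root ns
    run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
    run ip netns exec "$TARGET_NS" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator-facing NIC
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$TARGET_IP"                             # reachability check, both directions
    run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"
}

nvmf_tcp_init
```

With both interfaces on the same physical adapter, the namespace is what forces traffic out on the wire instead of being short-circuited through the local stack, which is why the target process is later launched with `ip netns exec cvl_0_0_ns_spdk`.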
00:06:31.788 [2024-12-06 11:08:04.262409] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.788 [2024-12-06 11:08:04.339879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.788 [2024-12-06 11:08:04.380362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.788 [2024-12-06 11:08:04.380399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.788 [2024-12-06 11:08:04.380405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.788 [2024-12-06 11:08:04.380411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.788 [2024-12-06 11:08:04.380416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
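The `waitforlisten 1549768` call above blocks until nvmf_tgt has created its RPC socket, with `max_retries=100` as the trace shows. A simplified stand-in for that polling loop (not SPDK's actual helper) looks like:

```shell
# Simplified stand-in for the waitforlisten helper seen in the trace: poll
# until the target process has created its UNIX-domain RPC socket
# (e.g. /var/tmp/spdk.sock), giving up after max_retries attempts.
# This is an illustrative sketch, not SPDK's actual implementation.
wait_for_rpc_sock() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr || -e $rpc_addr ]] && return 0   # socket exists: ready
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```

Once this returns 0, subsequent `rpc_cmd` calls (such as the `nvmf_create_transport -t tcp -o -u 8192` below) can talk to the target over that socket.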
00:06:31.788 [2024-12-06 11:08:04.382085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.788 [2024-12-06 11:08:04.382200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.788 [2024-12-06 11:08:04.382312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.788 [2024-12-06 11:08:04.382313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 [2024-12-06 11:08:05.120716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:32.362 11:08:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 Malloc0 00:06:32.362 [2024-12-06 11:08:05.197359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1550032 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1550032 /var/tmp/bdevperf.sock 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1550032 ']' 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:32.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:32.362 { 00:06:32.362 "params": { 00:06:32.362 "name": "Nvme$subsystem", 00:06:32.362 "trtype": "$TEST_TRANSPORT", 00:06:32.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:32.362 "adrfam": "ipv4", 00:06:32.362 "trsvcid": "$NVMF_PORT", 00:06:32.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:32.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:32.362 "hdgst": ${hdgst:-false}, 
00:06:32.362 "ddgst": ${ddgst:-false} 00:06:32.362 }, 00:06:32.362 "method": "bdev_nvme_attach_controller" 00:06:32.362 } 00:06:32.362 EOF 00:06:32.362 )") 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:32.362 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:32.362 "params": { 00:06:32.362 "name": "Nvme0", 00:06:32.362 "trtype": "tcp", 00:06:32.362 "traddr": "10.0.0.2", 00:06:32.362 "adrfam": "ipv4", 00:06:32.362 "trsvcid": "4420", 00:06:32.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:32.362 "hdgst": false, 00:06:32.362 "ddgst": false 00:06:32.362 }, 00:06:32.362 "method": "bdev_nvme_attach_controller" 00:06:32.362 }' 00:06:32.362 [2024-12-06 11:08:05.292622] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:06:32.362 [2024-12-06 11:08:05.292665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550032 ] 00:06:32.620 [2024-12-06 11:08:05.363361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.620 [2024-12-06 11:08:05.401278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.877 Running I/O for 10 seconds... 
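The `gen_nvmf_target_json 0` heredoc above fills in a per-controller `bdev_nvme_attach_controller` entry, and the `printf '%s\n'` output shows the resolved values. A sketch of the resulting config follows; the trace only shows the per-controller params object (nvmf/common.sh@586), so the surrounding `subsystems`/`bdev` wrapper here is my assumption about the final shape handed to `--json`, and the helper name is illustrative.

```shell
# Sketch of the JSON that gen_nvmf_target_json 0 produced in the trace: one
# NVMe-oF controller attached over TCP. Values are the ones the log prints;
# the subsystems/bdev wrapper is assumed, and the helper name is illustrative.
gen_target_json() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
```

bdevperf is then launched roughly as the trace shows, with the config fed through process substitution (which is why the log records `--json /dev/fd/63`): `bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json) -q 64 -o 65536 -w verify -t 10`.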
00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1025 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1025 -ge 100 ']' 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.446 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.446 [2024-12-06 11:08:06.168826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.446 [2024-12-06 11:08:06.168949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.446 [2024-12-06 11:08:06.168955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:06:33.446 [... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat for WRITE cid:13 through cid:51 (lba 9856 through 14720), each completed ABORTED - SQ DELETION (00/08) qid:1 ...] 00:06:33.447 [2024-12-06 11:08:06.169509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 
11:08:06.169663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.169740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.447 [2024-12-06 11:08:06.169746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.447 [2024-12-06 11:08:06.170609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:33.447 task offset: 8960 on job bdev=Nvme0n1 fails 00:06:33.447 00:06:33.447 Latency(us) 00:06:33.447 [2024-12-06T10:08:06.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.447 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.447 Job: Nvme0n1 ended in about 0.52 seconds with error 00:06:33.447 Verification LBA range: start 0x0 length 0x400 00:06:33.447 Nvme0n1 : 0.52 2072.65 129.54 121.92 0.00 28557.79 1437.32 24903.68 00:06:33.447 [2024-12-06T10:08:06.386Z] =================================================================================================================== 00:06:33.448 [2024-12-06T10:08:06.386Z] Total : 2072.65 129.54 121.92 0.00 28557.79 1437.32 24903.68 00:06:33.448 [2024-12-06 11:08:06.172807] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.448 [2024-12-06 11:08:06.172829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1346630 (9): Bad file descriptor 00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.448 [2024-12-06 
11:08:06.180280] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:33.448 [2024-12-06 11:08:06.180368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:33.448 [2024-12-06 11:08:06.180389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.448 [2024-12-06 11:08:06.180405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:33.448 [2024-12-06 11:08:06.180412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:33.448 [2024-12-06 11:08:06.180422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:33.448 [2024-12-06 11:08:06.180428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1346630 00:06:33.448 [2024-12-06 11:08:06.180446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1346630 (9): Bad file descriptor 00:06:33.448 [2024-12-06 11:08:06.180458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:33.448 [2024-12-06 11:08:06.180464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:33.448 [2024-12-06 11:08:06.180472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:33.448 [2024-12-06 11:08:06.180480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.448 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1550032 00:06:34.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1550032) - No such process 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:34.383 { 00:06:34.383 "params": { 00:06:34.383 "name": "Nvme$subsystem", 00:06:34.383 "trtype": "$TEST_TRANSPORT", 00:06:34.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.383 "adrfam": "ipv4", 00:06:34.383 "trsvcid": "$NVMF_PORT", 00:06:34.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.383 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:34.383 "hdgst": ${hdgst:-false}, 00:06:34.383 "ddgst": ${ddgst:-false} 00:06:34.383 }, 00:06:34.383 "method": "bdev_nvme_attach_controller" 00:06:34.383 } 00:06:34.383 EOF 00:06:34.383 )") 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:34.383 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:34.384 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:34.384 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:34.384 "params": { 00:06:34.384 "name": "Nvme0", 00:06:34.384 "trtype": "tcp", 00:06:34.384 "traddr": "10.0.0.2", 00:06:34.384 "adrfam": "ipv4", 00:06:34.384 "trsvcid": "4420", 00:06:34.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.384 "hdgst": false, 00:06:34.384 "ddgst": false 00:06:34.384 }, 00:06:34.384 "method": "bdev_nvme_attach_controller" 00:06:34.384 }' 00:06:34.384 [2024-12-06 11:08:07.238559] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:06:34.384 [2024-12-06 11:08:07.238602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550393 ] 00:06:34.384 [2024-12-06 11:08:07.311699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.643 [2024-12-06 11:08:07.347610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.643 Running I/O for 1 seconds... 
00:06:36.024 2112.00 IOPS, 132.00 MiB/s 00:06:36.024 Latency(us) 00:06:36.024 [2024-12-06T10:08:08.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.024 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:36.024 Verification LBA range: start 0x0 length 0x400 00:06:36.024 Nvme0n1 : 1.01 2165.15 135.32 0.00 0.00 29124.13 7447.27 25141.99 00:06:36.024 [2024-12-06T10:08:08.962Z] =================================================================================================================== 00:06:36.024 [2024-12-06T10:08:08.962Z] Total : 2165.15 135.32 0.00 0.00 29124.13 7447.27 25141.99 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.024 11:08:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:36.024 rmmod nvme_tcp 00:06:36.024 rmmod nvme_fabrics 00:06:36.024 rmmod nvme_keyring 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1549768 ']' 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1549768 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1549768 ']' 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1549768 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549768 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549768' 00:06:36.024 killing process with pid 1549768 00:06:36.024 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1549768 00:06:36.024 11:08:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1549768 00:06:36.283 [2024-12-06 11:08:09.008204] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.283 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:38.240 00:06:38.240 real 0m13.188s 00:06:38.240 user 0m22.753s 
00:06:38.240 sys 0m5.722s 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.240 ************************************ 00:06:38.240 END TEST nvmf_host_management 00:06:38.240 ************************************ 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.240 11:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.551 ************************************ 00:06:38.551 START TEST nvmf_lvol 00:06:38.551 ************************************ 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:38.551 * Looking for test storage... 
00:06:38.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.551 11:08:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.551 --rc genhtml_branch_coverage=1 00:06:38.551 --rc genhtml_function_coverage=1 00:06:38.551 --rc genhtml_legend=1 00:06:38.551 --rc geninfo_all_blocks=1 00:06:38.551 --rc geninfo_unexecuted_blocks=1 
00:06:38.551 00:06:38.551 ' 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.551 --rc genhtml_branch_coverage=1 00:06:38.551 --rc genhtml_function_coverage=1 00:06:38.551 --rc genhtml_legend=1 00:06:38.551 --rc geninfo_all_blocks=1 00:06:38.551 --rc geninfo_unexecuted_blocks=1 00:06:38.551 00:06:38.551 ' 00:06:38.551 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.551 --rc genhtml_branch_coverage=1 00:06:38.551 --rc genhtml_function_coverage=1 00:06:38.552 --rc genhtml_legend=1 00:06:38.552 --rc geninfo_all_blocks=1 00:06:38.552 --rc geninfo_unexecuted_blocks=1 00:06:38.552 00:06:38.552 ' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.552 --rc genhtml_branch_coverage=1 00:06:38.552 --rc genhtml_function_coverage=1 00:06:38.552 --rc genhtml_legend=1 00:06:38.552 --rc geninfo_all_blocks=1 00:06:38.552 --rc geninfo_unexecuted_blocks=1 00:06:38.552 00:06:38.552 ' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.552 11:08:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.552 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:45.123 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:45.123 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.123 
11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:45.123 Found net devices under 0000:af:00.0: cvl_0_0 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.123 11:08:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:45.123 Found net devices under 0000:af:00.1: cvl_0_1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:45.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:06:45.123 00:06:45.123 --- 10.0.0.2 ping statistics --- 00:06:45.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.123 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:06:45.123 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:06:45.123 00:06:45.123 --- 10.0.0.1 ping statistics --- 00:06:45.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.124 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1554401 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1554401 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1554401 ']' 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.124 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.124 [2024-12-06 11:08:17.564208] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:06:45.124 [2024-12-06 11:08:17.564257] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.124 [2024-12-06 11:08:17.640641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.124 [2024-12-06 11:08:17.679465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.124 [2024-12-06 11:08:17.679501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.124 [2024-12-06 11:08:17.679508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.124 [2024-12-06 11:08:17.679515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.124 [2024-12-06 11:08:17.679520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:45.124 [2024-12-06 11:08:17.680789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.124 [2024-12-06 11:08:17.680903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.124 [2024-12-06 11:08:17.680904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.693 [2024-12-06 11:08:18.566271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.693 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.952 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:45.952 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.211 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.211 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:46.470 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:46.471 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0889139b-a1ec-4897-a905-02ed48fdbbb1 00:06:46.471 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0889139b-a1ec-4897-a905-02ed48fdbbb1 lvol 20 00:06:46.729 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f94bf61f-995c-4559-9814-9c170431fb07 00:06:46.729 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.987 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f94bf61f-995c-4559-9814-9c170431fb07 00:06:47.246 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.246 [2024-12-06 11:08:20.114090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.246 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.505 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1554932 00:06:47.505 11:08:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:47.505 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:48.444 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f94bf61f-995c-4559-9814-9c170431fb07 MY_SNAPSHOT 00:06:48.703 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ecef60cf-15b1-4a95-a60e-cb99cecfb9d4 00:06:48.703 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f94bf61f-995c-4559-9814-9c170431fb07 30 00:06:48.961 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ecef60cf-15b1-4a95-a60e-cb99cecfb9d4 MY_CLONE 00:06:49.219 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0495272c-089e-498f-b5d2-ce247bd8e8f6 00:06:49.219 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0495272c-089e-498f-b5d2-ce247bd8e8f6 00:06:49.785 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1554932 00:06:57.915 Initializing NVMe Controllers 00:06:57.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:57.915 Controller IO queue size 128, less than required. 00:06:57.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:57.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:57.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:57.915 Initialization complete. Launching workers. 00:06:57.915 ======================================================== 00:06:57.915 Latency(us) 00:06:57.915 Device Information : IOPS MiB/s Average min max 00:06:57.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12737.00 49.75 10051.03 1448.64 51372.44 00:06:57.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12786.10 49.95 10012.98 3562.71 56900.46 00:06:57.915 ======================================================== 00:06:57.915 Total : 25523.10 99.70 10031.96 1448.64 56900.46 00:06:57.915 00:06:57.915 11:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.174 11:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f94bf61f-995c-4559-9814-9c170431fb07 00:06:58.174 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0889139b-a1ec-4897-a905-02ed48fdbbb1 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.433 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.433 rmmod nvme_tcp 00:06:58.433 rmmod nvme_fabrics 00:06:58.433 rmmod nvme_keyring 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1554401 ']' 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1554401 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1554401 ']' 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1554401 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.434 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554401 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554401' 00:06:58.693 killing process with pid 1554401 00:06:58.693 11:08:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1554401 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1554401 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.693 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.231 00:07:01.231 real 0m22.482s 00:07:01.231 user 1m4.279s 00:07:01.231 sys 0m7.643s 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.231 ************************************ 00:07:01.231 END TEST 
nvmf_lvol 00:07:01.231 ************************************ 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.231 ************************************ 00:07:01.231 START TEST nvmf_lvs_grow 00:07:01.231 ************************************ 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.231 * Looking for test storage... 00:07:01.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.231 11:08:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.231 --rc genhtml_branch_coverage=1 00:07:01.231 --rc genhtml_function_coverage=1 00:07:01.231 --rc genhtml_legend=1 00:07:01.231 --rc geninfo_all_blocks=1 00:07:01.231 --rc geninfo_unexecuted_blocks=1 00:07:01.231 00:07:01.231 ' 
00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.231 --rc genhtml_branch_coverage=1 00:07:01.231 --rc genhtml_function_coverage=1 00:07:01.231 --rc genhtml_legend=1 00:07:01.231 --rc geninfo_all_blocks=1 00:07:01.231 --rc geninfo_unexecuted_blocks=1 00:07:01.231 00:07:01.231 ' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.231 --rc genhtml_branch_coverage=1 00:07:01.231 --rc genhtml_function_coverage=1 00:07:01.231 --rc genhtml_legend=1 00:07:01.231 --rc geninfo_all_blocks=1 00:07:01.231 --rc geninfo_unexecuted_blocks=1 00:07:01.231 00:07:01.231 ' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.231 --rc genhtml_branch_coverage=1 00:07:01.231 --rc genhtml_function_coverage=1 00:07:01.231 --rc genhtml_legend=1 00:07:01.231 --rc geninfo_all_blocks=1 00:07:01.231 --rc geninfo_unexecuted_blocks=1 00:07:01.231 00:07:01.231 ' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.231 11:08:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.231 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.232 
11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.232 11:08:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.232 
11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.232 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.804 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:07.805 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:07.805 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.805 
11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:07.805 Found net devices under 0000:af:00.0: cvl_0_0 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:07.805 Found net devices under 0000:af:00.1: cvl_0_1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.805 11:08:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:07.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:07:07.805 00:07:07.805 --- 10.0.0.2 ping statistics --- 00:07:07.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.805 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:07.805 00:07:07.805 --- 10.0.0.1 ping statistics --- 00:07:07.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.805 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.805 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1560749 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1560749 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1560749 ']' 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.805 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.806 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.806 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.806 [2024-12-06 11:08:40.097029] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:07:07.806 [2024-12-06 11:08:40.097084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.806 [2024-12-06 11:08:40.174602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.806 [2024-12-06 11:08:40.214391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.806 [2024-12-06 11:08:40.214423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.806 [2024-12-06 11:08:40.214429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.806 [2024-12-06 11:08:40.214435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.806 [2024-12-06 11:08:40.214439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:07.806 [2024-12-06 11:08:40.214951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.064 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:08.322 [2024-12-06 11:08:41.120521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.322 ************************************ 00:07:08.322 START TEST lvs_grow_clean 00:07:08.322 ************************************ 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:08.322 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:08.323 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:08.323 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.323 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.323 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.581 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:08.581 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:08.840 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:08.840 11:08:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:08.840 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:08.840 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:08.840 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:08.840 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6262502-2d30-45a9-89ad-bf67bae64dd9 lvol 150 00:07:09.099 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 00:07:09.099 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.099 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:09.357 [2024-12-06 11:08:42.077933] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:09.357 [2024-12-06 11:08:42.077983] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:09.357 true 00:07:09.357 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:09.357 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:09.357 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:09.357 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.614 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 00:07:09.872 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.872 [2024-12-06 11:08:42.751920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.872 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1561320 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:10.130 11:08:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1561320 /var/tmp/bdevperf.sock 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1561320 ']' 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:10.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.130 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.130 [2024-12-06 11:08:42.981500] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:07:10.130 [2024-12-06 11:08:42.981544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561320 ] 00:07:10.130 [2024-12-06 11:08:43.052762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.388 [2024-12-06 11:08:43.090131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.954 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.954 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:10.954 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:11.212 Nvme0n1 00:07:11.212 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:11.470 [ 00:07:11.470 { 00:07:11.470 "name": "Nvme0n1", 00:07:11.470 "aliases": [ 00:07:11.470 "d96ec9e2-72ac-49b4-aa66-fdac8a9f1265" 00:07:11.470 ], 00:07:11.470 "product_name": "NVMe disk", 00:07:11.470 "block_size": 4096, 00:07:11.470 "num_blocks": 38912, 00:07:11.470 "uuid": "d96ec9e2-72ac-49b4-aa66-fdac8a9f1265", 00:07:11.470 "numa_id": 1, 00:07:11.470 "assigned_rate_limits": { 00:07:11.470 "rw_ios_per_sec": 0, 00:07:11.470 "rw_mbytes_per_sec": 0, 00:07:11.470 "r_mbytes_per_sec": 0, 00:07:11.470 "w_mbytes_per_sec": 0 00:07:11.470 }, 00:07:11.470 "claimed": false, 00:07:11.470 "zoned": false, 00:07:11.470 "supported_io_types": { 00:07:11.470 "read": true, 
00:07:11.470 "write": true, 00:07:11.470 "unmap": true, 00:07:11.470 "flush": true, 00:07:11.470 "reset": true, 00:07:11.470 "nvme_admin": true, 00:07:11.470 "nvme_io": true, 00:07:11.470 "nvme_io_md": false, 00:07:11.470 "write_zeroes": true, 00:07:11.470 "zcopy": false, 00:07:11.470 "get_zone_info": false, 00:07:11.470 "zone_management": false, 00:07:11.470 "zone_append": false, 00:07:11.470 "compare": true, 00:07:11.470 "compare_and_write": true, 00:07:11.470 "abort": true, 00:07:11.470 "seek_hole": false, 00:07:11.470 "seek_data": false, 00:07:11.470 "copy": true, 00:07:11.470 "nvme_iov_md": false 00:07:11.470 }, 00:07:11.470 "memory_domains": [ 00:07:11.470 { 00:07:11.470 "dma_device_id": "system", 00:07:11.470 "dma_device_type": 1 00:07:11.470 } 00:07:11.470 ], 00:07:11.470 "driver_specific": { 00:07:11.470 "nvme": [ 00:07:11.470 { 00:07:11.470 "trid": { 00:07:11.470 "trtype": "TCP", 00:07:11.470 "adrfam": "IPv4", 00:07:11.470 "traddr": "10.0.0.2", 00:07:11.470 "trsvcid": "4420", 00:07:11.470 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:11.470 }, 00:07:11.470 "ctrlr_data": { 00:07:11.470 "cntlid": 1, 00:07:11.470 "vendor_id": "0x8086", 00:07:11.470 "model_number": "SPDK bdev Controller", 00:07:11.470 "serial_number": "SPDK0", 00:07:11.470 "firmware_revision": "25.01", 00:07:11.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.470 "oacs": { 00:07:11.470 "security": 0, 00:07:11.470 "format": 0, 00:07:11.470 "firmware": 0, 00:07:11.470 "ns_manage": 0 00:07:11.470 }, 00:07:11.470 "multi_ctrlr": true, 00:07:11.470 "ana_reporting": false 00:07:11.470 }, 00:07:11.470 "vs": { 00:07:11.470 "nvme_version": "1.3" 00:07:11.470 }, 00:07:11.470 "ns_data": { 00:07:11.470 "id": 1, 00:07:11.470 "can_share": true 00:07:11.470 } 00:07:11.470 } 00:07:11.470 ], 00:07:11.470 "mp_policy": "active_passive" 00:07:11.470 } 00:07:11.470 } 00:07:11.470 ] 00:07:11.470 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1561584 00:07:11.470 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:11.470 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.470 Running I/O for 10 seconds... 00:07:12.403 Latency(us) 00:07:12.403 [2024-12-06T10:08:45.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.403 Nvme0n1 : 1.00 25359.00 99.06 0.00 0.00 0.00 0.00 0.00 00:07:12.403 [2024-12-06T10:08:45.341Z] =================================================================================================================== 00:07:12.403 [2024-12-06T10:08:45.341Z] Total : 25359.00 99.06 0.00 0.00 0.00 0.00 0.00 00:07:12.403 00:07:13.336 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:13.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.336 Nvme0n1 : 2.00 25459.50 99.45 0.00 0.00 0.00 0.00 0.00 00:07:13.336 [2024-12-06T10:08:46.274Z] =================================================================================================================== 00:07:13.336 [2024-12-06T10:08:46.274Z] Total : 25459.50 99.45 0.00 0.00 0.00 0.00 0.00 00:07:13.336 00:07:13.594 true 00:07:13.594 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:13.594 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:13.853 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:13.853 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:13.853 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1561584 00:07:14.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.419 Nvme0n1 : 3.00 25567.67 99.87 0.00 0.00 0.00 0.00 0.00 00:07:14.419 [2024-12-06T10:08:47.357Z] =================================================================================================================== 00:07:14.419 [2024-12-06T10:08:47.357Z] Total : 25567.67 99.87 0.00 0.00 0.00 0.00 0.00 00:07:14.419 00:07:15.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.356 Nvme0n1 : 4.00 25636.00 100.14 0.00 0.00 0.00 0.00 0.00 00:07:15.356 [2024-12-06T10:08:48.294Z] =================================================================================================================== 00:07:15.356 [2024-12-06T10:08:48.294Z] Total : 25636.00 100.14 0.00 0.00 0.00 0.00 0.00 00:07:15.356 00:07:16.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.732 Nvme0n1 : 5.00 25679.40 100.31 0.00 0.00 0.00 0.00 0.00 00:07:16.732 [2024-12-06T10:08:49.670Z] =================================================================================================================== 00:07:16.732 [2024-12-06T10:08:49.670Z] Total : 25679.40 100.31 0.00 0.00 0.00 0.00 0.00 00:07:16.732 00:07:17.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.667 Nvme0n1 : 6.00 25687.50 100.34 0.00 0.00 0.00 0.00 0.00 00:07:17.667 [2024-12-06T10:08:50.605Z] =================================================================================================================== 00:07:17.667 
[2024-12-06T10:08:50.605Z] Total : 25687.50 100.34 0.00 0.00 0.00 0.00 0.00 00:07:17.667 00:07:18.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.607 Nvme0n1 : 7.00 25665.00 100.25 0.00 0.00 0.00 0.00 0.00 00:07:18.607 [2024-12-06T10:08:51.545Z] =================================================================================================================== 00:07:18.607 [2024-12-06T10:08:51.545Z] Total : 25665.00 100.25 0.00 0.00 0.00 0.00 0.00 00:07:18.607 00:07:19.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.544 Nvme0n1 : 8.00 25701.00 100.39 0.00 0.00 0.00 0.00 0.00 00:07:19.544 [2024-12-06T10:08:52.482Z] =================================================================================================================== 00:07:19.544 [2024-12-06T10:08:52.482Z] Total : 25701.00 100.39 0.00 0.00 0.00 0.00 0.00 00:07:19.544 00:07:20.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.481 Nvme0n1 : 9.00 25738.89 100.54 0.00 0.00 0.00 0.00 0.00 00:07:20.481 [2024-12-06T10:08:53.419Z] =================================================================================================================== 00:07:20.481 [2024-12-06T10:08:53.419Z] Total : 25738.89 100.54 0.00 0.00 0.00 0.00 0.00 00:07:20.481 00:07:21.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.459 Nvme0n1 : 10.00 25738.20 100.54 0.00 0.00 0.00 0.00 0.00 00:07:21.459 [2024-12-06T10:08:54.397Z] =================================================================================================================== 00:07:21.459 [2024-12-06T10:08:54.397Z] Total : 25738.20 100.54 0.00 0.00 0.00 0.00 0.00 00:07:21.459 00:07:21.459 00:07:21.459 Latency(us) 00:07:21.459 [2024-12-06T10:08:54.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:21.459 Nvme0n1 : 10.00 25741.35 100.55 0.00 0.00 4969.86 2889.54 9413.35 00:07:21.459 [2024-12-06T10:08:54.397Z] =================================================================================================================== 00:07:21.459 [2024-12-06T10:08:54.397Z] Total : 25741.35 100.55 0.00 0.00 4969.86 2889.54 9413.35 00:07:21.459 { 00:07:21.459 "results": [ 00:07:21.459 { 00:07:21.459 "job": "Nvme0n1", 00:07:21.459 "core_mask": "0x2", 00:07:21.459 "workload": "randwrite", 00:07:21.459 "status": "finished", 00:07:21.459 "queue_depth": 128, 00:07:21.459 "io_size": 4096, 00:07:21.459 "runtime": 10.003749, 00:07:21.459 "iops": 25741.34956804694, 00:07:21.459 "mibps": 100.55214675018335, 00:07:21.459 "io_failed": 0, 00:07:21.459 "io_timeout": 0, 00:07:21.459 "avg_latency_us": 4969.856781300638, 00:07:21.459 "min_latency_us": 2889.541818181818, 00:07:21.459 "max_latency_us": 9413.352727272728 00:07:21.459 } 00:07:21.459 ], 00:07:21.459 "core_count": 1 00:07:21.459 } 00:07:21.459 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1561320 00:07:21.459 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1561320 ']' 00:07:21.459 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1561320 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1561320 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:21.460 11:08:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1561320' 00:07:21.460 killing process with pid 1561320 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1561320 00:07:21.460 Received shutdown signal, test time was about 10.000000 seconds 00:07:21.460 00:07:21.460 Latency(us) 00:07:21.460 [2024-12-06T10:08:54.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.460 [2024-12-06T10:08:54.398Z] =================================================================================================================== 00:07:21.460 [2024-12-06T10:08:54.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:21.460 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1561320 00:07:21.749 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.008 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:22.008 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:22.008 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:22.267 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:22.267 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:22.267 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.526 [2024-12-06 11:08:55.231686] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.526 
11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:22.526 request: 00:07:22.526 { 00:07:22.526 "uuid": "b6262502-2d30-45a9-89ad-bf67bae64dd9", 00:07:22.526 "method": "bdev_lvol_get_lvstores", 00:07:22.526 "req_id": 1 00:07:22.526 } 00:07:22.526 Got JSON-RPC error response 00:07:22.526 response: 00:07:22.526 { 00:07:22.526 "code": -19, 00:07:22.526 "message": "No such device" 00:07:22.526 } 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.526 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.784 aio_bdev 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.784 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.042 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 -t 2000 00:07:23.042 [ 00:07:23.042 { 00:07:23.042 "name": "d96ec9e2-72ac-49b4-aa66-fdac8a9f1265", 00:07:23.042 "aliases": [ 00:07:23.042 "lvs/lvol" 00:07:23.042 ], 00:07:23.042 "product_name": "Logical Volume", 00:07:23.042 "block_size": 4096, 00:07:23.042 "num_blocks": 38912, 00:07:23.042 "uuid": "d96ec9e2-72ac-49b4-aa66-fdac8a9f1265", 00:07:23.042 "assigned_rate_limits": { 00:07:23.042 "rw_ios_per_sec": 0, 00:07:23.042 "rw_mbytes_per_sec": 0, 00:07:23.042 "r_mbytes_per_sec": 0, 00:07:23.042 "w_mbytes_per_sec": 0 00:07:23.042 }, 00:07:23.042 "claimed": false, 00:07:23.042 "zoned": false, 00:07:23.042 "supported_io_types": { 00:07:23.042 "read": true, 00:07:23.042 "write": true, 00:07:23.042 "unmap": true, 00:07:23.042 "flush": false, 00:07:23.042 "reset": true, 00:07:23.042 
"nvme_admin": false, 00:07:23.042 "nvme_io": false, 00:07:23.042 "nvme_io_md": false, 00:07:23.042 "write_zeroes": true, 00:07:23.042 "zcopy": false, 00:07:23.042 "get_zone_info": false, 00:07:23.042 "zone_management": false, 00:07:23.042 "zone_append": false, 00:07:23.042 "compare": false, 00:07:23.042 "compare_and_write": false, 00:07:23.042 "abort": false, 00:07:23.042 "seek_hole": true, 00:07:23.042 "seek_data": true, 00:07:23.042 "copy": false, 00:07:23.042 "nvme_iov_md": false 00:07:23.042 }, 00:07:23.042 "driver_specific": { 00:07:23.042 "lvol": { 00:07:23.042 "lvol_store_uuid": "b6262502-2d30-45a9-89ad-bf67bae64dd9", 00:07:23.042 "base_bdev": "aio_bdev", 00:07:23.042 "thin_provision": false, 00:07:23.042 "num_allocated_clusters": 38, 00:07:23.042 "snapshot": false, 00:07:23.042 "clone": false, 00:07:23.042 "esnap_clone": false 00:07:23.042 } 00:07:23.042 } 00:07:23.042 } 00:07:23.042 ] 00:07:23.042 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:23.042 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:23.042 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.301 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.301 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:23.301 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.560 11:08:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.560 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d96ec9e2-72ac-49b4-aa66-fdac8a9f1265 00:07:23.560 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6262502-2d30-45a9-89ad-bf67bae64dd9 00:07:23.818 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.077 00:07:24.077 real 0m15.706s 00:07:24.077 user 0m15.245s 00:07:24.077 sys 0m1.568s 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:24.077 ************************************ 00:07:24.077 END TEST lvs_grow_clean 00:07:24.077 ************************************ 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:24.077 ************************************ 
00:07:24.077 START TEST lvs_grow_dirty 00:07:24.077 ************************************ 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.077 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.336 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:24.336 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:24.596 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:24.596 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:24.596 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a lvol 150 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.855 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:25.115 [2024-12-06 11:08:57.886859] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:25.115 [2024-12-06 11:08:57.886911] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:25.115 true 00:07:25.115 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:25.115 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:25.374 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:25.374 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.374 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:25.633 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.633 [2024-12-06 11:08:58.540793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.633 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.891 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:25.891 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1564287 00:07:25.891 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1564287 /var/tmp/bdevperf.sock 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1564287 ']' 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.892 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.892 [2024-12-06 11:08:58.735568] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:07:25.892 [2024-12-06 11:08:58.735610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564287 ] 00:07:25.892 [2024-12-06 11:08:58.805749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.150 [2024-12-06 11:08:58.843140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.150 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.150 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:26.150 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:26.410 Nvme0n1 00:07:26.410 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.669 [ 00:07:26.669 { 00:07:26.669 "name": "Nvme0n1", 00:07:26.669 "aliases": [ 00:07:26.669 "9166711f-0aeb-4c9c-b51f-503adb3865a5" 00:07:26.669 ], 00:07:26.669 "product_name": "NVMe disk", 00:07:26.669 "block_size": 4096, 00:07:26.669 "num_blocks": 38912, 00:07:26.669 "uuid": "9166711f-0aeb-4c9c-b51f-503adb3865a5", 00:07:26.669 "numa_id": 1, 00:07:26.669 "assigned_rate_limits": { 00:07:26.669 "rw_ios_per_sec": 0, 00:07:26.669 "rw_mbytes_per_sec": 0, 00:07:26.669 "r_mbytes_per_sec": 0, 00:07:26.669 "w_mbytes_per_sec": 0 00:07:26.669 }, 00:07:26.669 "claimed": false, 00:07:26.669 "zoned": false, 00:07:26.669 "supported_io_types": { 00:07:26.669 "read": true, 
00:07:26.669 "write": true, 00:07:26.669 "unmap": true, 00:07:26.669 "flush": true, 00:07:26.669 "reset": true, 00:07:26.669 "nvme_admin": true, 00:07:26.669 "nvme_io": true, 00:07:26.669 "nvme_io_md": false, 00:07:26.669 "write_zeroes": true, 00:07:26.669 "zcopy": false, 00:07:26.669 "get_zone_info": false, 00:07:26.669 "zone_management": false, 00:07:26.669 "zone_append": false, 00:07:26.669 "compare": true, 00:07:26.669 "compare_and_write": true, 00:07:26.669 "abort": true, 00:07:26.669 "seek_hole": false, 00:07:26.669 "seek_data": false, 00:07:26.669 "copy": true, 00:07:26.669 "nvme_iov_md": false 00:07:26.669 }, 00:07:26.669 "memory_domains": [ 00:07:26.669 { 00:07:26.669 "dma_device_id": "system", 00:07:26.669 "dma_device_type": 1 00:07:26.669 } 00:07:26.669 ], 00:07:26.669 "driver_specific": { 00:07:26.669 "nvme": [ 00:07:26.669 { 00:07:26.669 "trid": { 00:07:26.669 "trtype": "TCP", 00:07:26.669 "adrfam": "IPv4", 00:07:26.669 "traddr": "10.0.0.2", 00:07:26.669 "trsvcid": "4420", 00:07:26.669 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.669 }, 00:07:26.669 "ctrlr_data": { 00:07:26.669 "cntlid": 1, 00:07:26.669 "vendor_id": "0x8086", 00:07:26.669 "model_number": "SPDK bdev Controller", 00:07:26.669 "serial_number": "SPDK0", 00:07:26.669 "firmware_revision": "25.01", 00:07:26.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.669 "oacs": { 00:07:26.669 "security": 0, 00:07:26.669 "format": 0, 00:07:26.669 "firmware": 0, 00:07:26.669 "ns_manage": 0 00:07:26.669 }, 00:07:26.669 "multi_ctrlr": true, 00:07:26.669 "ana_reporting": false 00:07:26.669 }, 00:07:26.669 "vs": { 00:07:26.669 "nvme_version": "1.3" 00:07:26.669 }, 00:07:26.669 "ns_data": { 00:07:26.669 "id": 1, 00:07:26.669 "can_share": true 00:07:26.669 } 00:07:26.669 } 00:07:26.669 ], 00:07:26.669 "mp_policy": "active_passive" 00:07:26.669 } 00:07:26.669 } 00:07:26.669 ] 00:07:26.669 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1564297 00:07:26.669 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.669 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.669 Running I/O for 10 seconds... 00:07:27.607 Latency(us) 00:07:27.607 [2024-12-06T10:09:00.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.607 Nvme0n1 : 1.00 24985.00 97.60 0.00 0.00 0.00 0.00 0.00 00:07:27.607 [2024-12-06T10:09:00.545Z] =================================================================================================================== 00:07:27.607 [2024-12-06T10:09:00.545Z] Total : 24985.00 97.60 0.00 0.00 0.00 0.00 0.00 00:07:27.607 00:07:28.543 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:28.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.801 Nvme0n1 : 2.00 25322.50 98.92 0.00 0.00 0.00 0.00 0.00 00:07:28.801 [2024-12-06T10:09:01.739Z] =================================================================================================================== 00:07:28.801 [2024-12-06T10:09:01.739Z] Total : 25322.50 98.92 0.00 0.00 0.00 0.00 0.00 00:07:28.801 00:07:28.801 true 00:07:28.801 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:28.801 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:29.060 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:29.060 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:29.060 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1564297 00:07:29.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.626 Nvme0n1 : 3.00 25396.67 99.21 0.00 0.00 0.00 0.00 0.00 00:07:29.626 [2024-12-06T10:09:02.564Z] =================================================================================================================== 00:07:29.626 [2024-12-06T10:09:02.564Z] Total : 25396.67 99.21 0.00 0.00 0.00 0.00 0.00 00:07:29.626 00:07:31.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.001 Nvme0n1 : 4.00 25472.75 99.50 0.00 0.00 0.00 0.00 0.00 00:07:31.001 [2024-12-06T10:09:03.939Z] =================================================================================================================== 00:07:31.001 [2024-12-06T10:09:03.939Z] Total : 25472.75 99.50 0.00 0.00 0.00 0.00 0.00 00:07:31.001 00:07:31.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.568 Nvme0n1 : 5.00 25572.60 99.89 0.00 0.00 0.00 0.00 0.00 00:07:31.568 [2024-12-06T10:09:04.506Z] =================================================================================================================== 00:07:31.568 [2024-12-06T10:09:04.506Z] Total : 25572.60 99.89 0.00 0.00 0.00 0.00 0.00 00:07:31.568 00:07:32.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.950 Nvme0n1 : 6.00 25650.67 100.20 0.00 0.00 0.00 0.00 0.00 00:07:32.950 [2024-12-06T10:09:05.888Z] =================================================================================================================== 00:07:32.950 
[2024-12-06T10:09:05.888Z] Total : 25650.67 100.20 0.00 0.00 0.00 0.00 0.00 00:07:32.950 00:07:33.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.887 Nvme0n1 : 7.00 25701.43 100.40 0.00 0.00 0.00 0.00 0.00 00:07:33.887 [2024-12-06T10:09:06.825Z] =================================================================================================================== 00:07:33.887 [2024-12-06T10:09:06.825Z] Total : 25701.43 100.40 0.00 0.00 0.00 0.00 0.00 00:07:33.887 00:07:34.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.825 Nvme0n1 : 8.00 25723.50 100.48 0.00 0.00 0.00 0.00 0.00 00:07:34.825 [2024-12-06T10:09:07.763Z] =================================================================================================================== 00:07:34.825 [2024-12-06T10:09:07.763Z] Total : 25723.50 100.48 0.00 0.00 0.00 0.00 0.00 00:07:34.825 00:07:35.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.764 Nvme0n1 : 9.00 25748.67 100.58 0.00 0.00 0.00 0.00 0.00 00:07:35.764 [2024-12-06T10:09:08.702Z] =================================================================================================================== 00:07:35.764 [2024-12-06T10:09:08.702Z] Total : 25748.67 100.58 0.00 0.00 0.00 0.00 0.00 00:07:35.764 00:07:36.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.703 Nvme0n1 : 10.00 25766.50 100.65 0.00 0.00 0.00 0.00 0.00 00:07:36.703 [2024-12-06T10:09:09.641Z] =================================================================================================================== 00:07:36.703 [2024-12-06T10:09:09.641Z] Total : 25766.50 100.65 0.00 0.00 0.00 0.00 0.00 00:07:36.703 00:07:36.703 00:07:36.703 Latency(us) 00:07:36.703 [2024-12-06T10:09:09.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:36.703 Nvme0n1 : 10.01 25766.00 100.65 0.00 0.00 4965.20 2159.71 14656.23 00:07:36.703 [2024-12-06T10:09:09.641Z] =================================================================================================================== 00:07:36.703 [2024-12-06T10:09:09.641Z] Total : 25766.00 100.65 0.00 0.00 4965.20 2159.71 14656.23 00:07:36.703 { 00:07:36.703 "results": [ 00:07:36.703 { 00:07:36.703 "job": "Nvme0n1", 00:07:36.703 "core_mask": "0x2", 00:07:36.703 "workload": "randwrite", 00:07:36.703 "status": "finished", 00:07:36.703 "queue_depth": 128, 00:07:36.703 "io_size": 4096, 00:07:36.703 "runtime": 10.005161, 00:07:36.703 "iops": 25766.00216628198, 00:07:36.703 "mibps": 100.64844596203899, 00:07:36.703 "io_failed": 0, 00:07:36.703 "io_timeout": 0, 00:07:36.703 "avg_latency_us": 4965.203496561548, 00:07:36.703 "min_latency_us": 2159.7090909090907, 00:07:36.703 "max_latency_us": 14656.232727272727 00:07:36.703 } 00:07:36.703 ], 00:07:36.703 "core_count": 1 00:07:36.703 } 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1564287 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1564287 ']' 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1564287 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564287 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.703 11:09:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564287' 00:07:36.703 killing process with pid 1564287 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1564287 00:07:36.703 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.703 00:07:36.703 Latency(us) 00:07:36.703 [2024-12-06T10:09:09.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.703 [2024-12-06T10:09:09.641Z] =================================================================================================================== 00:07:36.703 [2024-12-06T10:09:09.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.703 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1564287 00:07:36.962 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.221 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1560749 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1560749 00:07:37.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1560749 Killed "${NVMF_APP[@]}" "$@" 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1566913 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1566913 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1566913 ']' 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.480 11:09:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.480 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.741 [2024-12-06 11:09:10.447722] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:07:37.741 [2024-12-06 11:09:10.447769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.741 [2024-12-06 11:09:10.525940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.741 [2024-12-06 11:09:10.563995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.741 [2024-12-06 11:09:10.564031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.741 [2024-12-06 11:09:10.564037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.741 [2024-12-06 11:09:10.564043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.741 [2024-12-06 11:09:10.564048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.741 [2024-12-06 11:09:10.564632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.741 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.741 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:37.741 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.741 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.741 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.000 [2024-12-06 11:09:10.850145] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:38.000 [2024-12-06 11:09:10.850223] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:38.000 [2024-12-06 11:09:10.850248] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9166711f-0aeb-4c9c-b51f-503adb3865a5 
00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.000 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.259 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9166711f-0aeb-4c9c-b51f-503adb3865a5 -t 2000 00:07:38.518 [ 00:07:38.518 { 00:07:38.518 "name": "9166711f-0aeb-4c9c-b51f-503adb3865a5", 00:07:38.518 "aliases": [ 00:07:38.518 "lvs/lvol" 00:07:38.518 ], 00:07:38.518 "product_name": "Logical Volume", 00:07:38.518 "block_size": 4096, 00:07:38.518 "num_blocks": 38912, 00:07:38.518 "uuid": "9166711f-0aeb-4c9c-b51f-503adb3865a5", 00:07:38.518 "assigned_rate_limits": { 00:07:38.518 "rw_ios_per_sec": 0, 00:07:38.518 "rw_mbytes_per_sec": 0, 00:07:38.518 "r_mbytes_per_sec": 0, 00:07:38.518 "w_mbytes_per_sec": 0 00:07:38.518 }, 00:07:38.518 "claimed": false, 00:07:38.518 "zoned": false, 00:07:38.518 "supported_io_types": { 00:07:38.518 "read": true, 00:07:38.518 "write": true, 00:07:38.518 "unmap": true, 00:07:38.518 "flush": false, 00:07:38.518 "reset": true, 00:07:38.518 "nvme_admin": false, 00:07:38.518 "nvme_io": false, 00:07:38.518 "nvme_io_md": false, 00:07:38.518 "write_zeroes": true, 00:07:38.518 "zcopy": false, 00:07:38.518 "get_zone_info": false, 00:07:38.518 "zone_management": false, 00:07:38.518 "zone_append": 
false, 00:07:38.518 "compare": false, 00:07:38.518 "compare_and_write": false, 00:07:38.518 "abort": false, 00:07:38.518 "seek_hole": true, 00:07:38.518 "seek_data": true, 00:07:38.518 "copy": false, 00:07:38.518 "nvme_iov_md": false 00:07:38.518 }, 00:07:38.518 "driver_specific": { 00:07:38.518 "lvol": { 00:07:38.518 "lvol_store_uuid": "25d07251-c3a7-48b9-9dcc-f43507c6f93a", 00:07:38.518 "base_bdev": "aio_bdev", 00:07:38.518 "thin_provision": false, 00:07:38.518 "num_allocated_clusters": 38, 00:07:38.518 "snapshot": false, 00:07:38.518 "clone": false, 00:07:38.518 "esnap_clone": false 00:07:38.518 } 00:07:38.518 } 00:07:38.518 } 00:07:38.518 ] 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:38.518 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:38.777 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:38.777 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:39.037 [2024-12-06 11:09:11.730903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.037 11:09:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:39.037 request: 00:07:39.037 { 00:07:39.037 "uuid": "25d07251-c3a7-48b9-9dcc-f43507c6f93a", 00:07:39.037 "method": "bdev_lvol_get_lvstores", 00:07:39.037 "req_id": 1 00:07:39.037 } 00:07:39.037 Got JSON-RPC error response 00:07:39.037 response: 00:07:39.037 { 00:07:39.037 "code": -19, 00:07:39.037 "message": "No such device" 00:07:39.037 } 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.037 11:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.296 aio_bdev 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.296 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.555 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9166711f-0aeb-4c9c-b51f-503adb3865a5 -t 2000 00:07:39.555 [ 00:07:39.555 { 00:07:39.555 "name": "9166711f-0aeb-4c9c-b51f-503adb3865a5", 00:07:39.555 "aliases": [ 00:07:39.555 "lvs/lvol" 00:07:39.555 ], 00:07:39.555 "product_name": "Logical Volume", 00:07:39.555 "block_size": 4096, 00:07:39.555 "num_blocks": 38912, 00:07:39.555 "uuid": "9166711f-0aeb-4c9c-b51f-503adb3865a5", 00:07:39.555 "assigned_rate_limits": { 00:07:39.555 "rw_ios_per_sec": 0, 00:07:39.555 "rw_mbytes_per_sec": 0, 00:07:39.555 "r_mbytes_per_sec": 0, 00:07:39.555 "w_mbytes_per_sec": 0 00:07:39.555 }, 00:07:39.555 "claimed": false, 00:07:39.555 "zoned": false, 00:07:39.555 "supported_io_types": { 00:07:39.555 "read": true, 00:07:39.555 "write": true, 00:07:39.555 "unmap": true, 00:07:39.555 "flush": false, 00:07:39.555 "reset": true, 00:07:39.555 "nvme_admin": false, 00:07:39.555 "nvme_io": false, 00:07:39.555 "nvme_io_md": false, 00:07:39.555 "write_zeroes": true, 00:07:39.555 "zcopy": false, 00:07:39.555 "get_zone_info": false, 00:07:39.555 "zone_management": false, 00:07:39.555 "zone_append": false, 00:07:39.555 "compare": false, 00:07:39.555 "compare_and_write": false, 
00:07:39.555 "abort": false, 00:07:39.555 "seek_hole": true, 00:07:39.555 "seek_data": true, 00:07:39.555 "copy": false, 00:07:39.555 "nvme_iov_md": false 00:07:39.555 }, 00:07:39.555 "driver_specific": { 00:07:39.555 "lvol": { 00:07:39.555 "lvol_store_uuid": "25d07251-c3a7-48b9-9dcc-f43507c6f93a", 00:07:39.555 "base_bdev": "aio_bdev", 00:07:39.555 "thin_provision": false, 00:07:39.555 "num_allocated_clusters": 38, 00:07:39.555 "snapshot": false, 00:07:39.555 "clone": false, 00:07:39.555 "esnap_clone": false 00:07:39.556 } 00:07:39.556 } 00:07:39.556 } 00:07:39.556 ] 00:07:39.556 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.556 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:39.556 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.815 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.815 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:39.815 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.074 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.074 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9166711f-0aeb-4c9c-b51f-503adb3865a5 00:07:40.074 11:09:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25d07251-c3a7-48b9-9dcc-f43507c6f93a 00:07:40.334 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.593 00:07:40.593 real 0m16.414s 00:07:40.593 user 0m43.664s 00:07:40.593 sys 0m3.689s 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.593 ************************************ 00:07:40.593 END TEST lvs_grow_dirty 00:07:40.593 ************************************ 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
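The closing RPCs of lvs_grow_dirty above tear the fixture down in a fixed order: delete the lvol, delete its lvstore, delete the backing AIO bdev, then remove the backing file. A dry-run sketch of that ordering (the `RPC` indirection and the `/tmp/aio_bdev` path are assumptions for illustration; it only prints the commands instead of driving a live SPDK target):

```shell
#!/usr/bin/env bash
# Dry-run of the lvs_grow teardown order seen in the trace.
# RPC="echo rpc.py" keeps this runnable without a live SPDK target;
# point RPC at the real scripts/rpc.py to execute the calls for real.
RPC="echo rpc.py"

teardown_lvs() {
    local lvol_uuid=$1 lvs_uuid=$2 aio_file=$3
    $RPC bdev_lvol_delete "$lvol_uuid"            # 1. drop the logical volume
    $RPC bdev_lvol_delete_lvstore -u "$lvs_uuid"  # 2. drop its lvstore
    $RPC bdev_aio_delete aio_bdev                 # 3. drop the backing AIO bdev
    echo rm -f "$aio_file"                        # 4. remove the backing file
}

# UUIDs taken from the trace; the file path is a stand-in.
teardown_lvs 9166711f-0aeb-4c9c-b51f-503adb3865a5 \
             25d07251-c3a7-48b9-9dcc-f43507c6f93a \
             /tmp/aio_bdev
```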
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.593 nvmf_trace.0 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.593 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.593 rmmod nvme_tcp 00:07:40.593 rmmod nvme_fabrics 00:07:40.593 rmmod nvme_keyring 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1566913 ']' 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1566913 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1566913 ']' 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1566913 
00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566913 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566913' 00:07:40.852 killing process with pid 1566913 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1566913 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1566913 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:40.852 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.853 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.853 11:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.387 00:07:43.387 real 0m42.082s 00:07:43.387 user 1m4.509s 00:07:43.387 sys 0m10.214s 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.387 ************************************ 00:07:43.387 END TEST nvmf_lvs_grow 00:07:43.387 ************************************ 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.387 ************************************ 00:07:43.387 START TEST nvmf_bdev_io_wait 00:07:43.387 ************************************ 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.387 * Looking for test storage... 
00:07:43.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.387 11:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.387 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.387 --rc genhtml_branch_coverage=1 00:07:43.387 --rc genhtml_function_coverage=1 00:07:43.387 --rc genhtml_legend=1 00:07:43.387 --rc geninfo_all_blocks=1 00:07:43.387 --rc geninfo_unexecuted_blocks=1 00:07:43.387 00:07:43.387 ' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.387 --rc genhtml_branch_coverage=1 00:07:43.387 --rc genhtml_function_coverage=1 00:07:43.387 --rc genhtml_legend=1 00:07:43.387 --rc geninfo_all_blocks=1 00:07:43.387 --rc geninfo_unexecuted_blocks=1 00:07:43.387 00:07:43.387 ' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.387 --rc genhtml_branch_coverage=1 00:07:43.387 --rc genhtml_function_coverage=1 00:07:43.387 --rc genhtml_legend=1 00:07:43.387 --rc geninfo_all_blocks=1 00:07:43.387 --rc geninfo_unexecuted_blocks=1 00:07:43.387 00:07:43.387 ' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.387 --rc genhtml_branch_coverage=1 00:07:43.387 --rc genhtml_function_coverage=1 00:07:43.387 --rc genhtml_legend=1 00:07:43.387 --rc geninfo_all_blocks=1 00:07:43.387 --rc geninfo_unexecuted_blocks=1 00:07:43.387 00:07:43.387 ' 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.387 11:09:16 
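The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above compares dotted version strings field by field in pure bash to decide which lcov flags to use. A condensed sketch of the same idea; this is a simplification of the logic in scripts/common.sh, not a verbatim copy, and it prints a result rather than returning one:

```shell
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field,
# padding the shorter one with zeros. Prints "lt", "gt", or "eq".
# Simplified from the cmp_versions walk seen in the trace.
cmp_versions() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && { echo gt; return; }
        (( x < y )) && { echo lt; return; }
    done
    echo eq
}

cmp_versions 1.15 2    # the lcov check from the trace; prints "lt"
```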
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.387 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.388 11:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:49.954 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.955 11:09:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:49.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:49.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.955 11:09:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:49.955 Found net devices under 0000:af:00.0: cvl_0_0 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.955 
11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:49.955 Found net devices under 0000:af:00.1: cvl_0_1 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.955 11:09:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.955 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:07:49.955 00:07:49.955 --- 10.0.0.2 ping statistics --- 00:07:49.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.955 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:07:49.955 00:07:49.955 --- 10.0.0.1 ping statistics --- 00:07:49.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.955 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.955 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1571239 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1571239 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1571239 ']' 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.956 11:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.956 [2024-12-06 11:09:22.257524] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:07:49.956 [2024-12-06 11:09:22.257576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.956 [2024-12-06 11:09:22.332157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.956 [2024-12-06 11:09:22.373188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.956 [2024-12-06 11:09:22.373225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:49.956 [2024-12-06 11:09:22.373231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.956 [2024-12-06 11:09:22.373237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.956 [2024-12-06 11:09:22.373242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.956 [2024-12-06 11:09:22.374637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.956 [2024-12-06 11:09:22.374752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.956 [2024-12-06 11:09:22.374854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.956 [2024-12-06 11:09:22.374854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.215 11:09:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.215 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 [2024-12-06 11:09:23.179729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 Malloc0 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.474 
11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 [2024-12-06 11:09:23.234506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1571350 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1571354 
00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.474 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.474 { 00:07:50.474 "params": { 00:07:50.474 "name": "Nvme$subsystem", 00:07:50.475 "trtype": "$TEST_TRANSPORT", 00:07:50.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "$NVMF_PORT", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.475 "hdgst": ${hdgst:-false}, 00:07:50.475 "ddgst": ${ddgst:-false} 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 } 00:07:50.475 EOF 00:07:50.475 )") 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1571357 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.475 { 00:07:50.475 "params": { 00:07:50.475 
"name": "Nvme$subsystem", 00:07:50.475 "trtype": "$TEST_TRANSPORT", 00:07:50.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "$NVMF_PORT", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.475 "hdgst": ${hdgst:-false}, 00:07:50.475 "ddgst": ${ddgst:-false} 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 } 00:07:50.475 EOF 00:07:50.475 )") 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1571361 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:50.475 { 00:07:50.475 "params": { 00:07:50.475 "name": "Nvme$subsystem", 00:07:50.475 "trtype": "$TEST_TRANSPORT", 00:07:50.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "$NVMF_PORT", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.475 "hdgst": ${hdgst:-false}, 00:07:50.475 "ddgst": ${ddgst:-false} 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 } 00:07:50.475 EOF 00:07:50.475 )") 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.475 { 00:07:50.475 "params": { 00:07:50.475 "name": "Nvme$subsystem", 00:07:50.475 "trtype": "$TEST_TRANSPORT", 00:07:50.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "$NVMF_PORT", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.475 "hdgst": ${hdgst:-false}, 00:07:50.475 "ddgst": ${ddgst:-false} 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 } 00:07:50.475 EOF 00:07:50.475 )") 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1571350 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.475 
11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.475 "params": { 00:07:50.475 "name": "Nvme1", 00:07:50.475 "trtype": "tcp", 00:07:50.475 "traddr": "10.0.0.2", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "4420", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.475 "hdgst": false, 00:07:50.475 "ddgst": false 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 }' 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.475 "params": { 00:07:50.475 "name": "Nvme1", 00:07:50.475 "trtype": "tcp", 00:07:50.475 "traddr": "10.0.0.2", 00:07:50.475 "adrfam": "ipv4", 00:07:50.475 "trsvcid": "4420", 00:07:50.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.475 "hdgst": false, 00:07:50.475 "ddgst": false 00:07:50.475 }, 00:07:50.475 "method": "bdev_nvme_attach_controller" 00:07:50.475 }' 00:07:50.475 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.476 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.476 "params": { 00:07:50.476 "name": "Nvme1", 00:07:50.476 "trtype": "tcp", 00:07:50.476 "traddr": "10.0.0.2", 00:07:50.476 "adrfam": "ipv4", 00:07:50.476 "trsvcid": "4420", 00:07:50.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.476 "hdgst": false, 00:07:50.476 "ddgst": false 00:07:50.476 }, 00:07:50.476 "method": "bdev_nvme_attach_controller" 00:07:50.476 }' 00:07:50.476 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.476 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.476 "params": { 00:07:50.476 "name": "Nvme1", 00:07:50.476 "trtype": "tcp", 00:07:50.476 "traddr": "10.0.0.2", 00:07:50.476 "adrfam": "ipv4", 00:07:50.476 "trsvcid": "4420", 00:07:50.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.476 "hdgst": false, 00:07:50.476 "ddgst": false 00:07:50.476 }, 00:07:50.476 "method": "bdev_nvme_attach_controller" 00:07:50.476 }' 00:07:50.476 [2024-12-06 11:09:23.286042] Starting SPDK v25.01-pre git sha1 
50b04b06b / DPDK 24.03.0 initialization... 00:07:50.476 [2024-12-06 11:09:23.286115] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:50.476 [2024-12-06 11:09:23.286497] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:07:50.476 [2024-12-06 11:09:23.286534] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:50.476 [2024-12-06 11:09:23.290172] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:07:50.476 [2024-12-06 11:09:23.290178] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:07:50.476 [2024-12-06 11:09:23.290218] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:50.476 [2024-12-06 11:09:23.290218] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:50.735 [2024-12-06 11:09:23.474486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.735 [2024-12-06 11:09:23.514995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:50.735 [2024-12-06 11:09:23.558379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.735 [2024-12-06 11:09:23.612274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.735 [2024-12-06
11:09:23.615964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.735 [2024-12-06 11:09:23.656236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:50.993 [2024-12-06 11:09:23.675804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.993 [2024-12-06 11:09:23.715625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:50.993 Running I/O for 1 seconds... 00:07:50.993 Running I/O for 1 seconds... 00:07:50.993 Running I/O for 1 seconds... 00:07:51.252 Running I/O for 1 seconds... 00:07:52.189 8323.00 IOPS, 32.51 MiB/s 00:07:52.189 Latency(us) 00:07:52.189 [2024-12-06T10:09:25.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.189 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:52.189 Nvme1n1 : 1.01 8360.11 32.66 0.00 0.00 15259.53 4408.79 22878.02 00:07:52.189 [2024-12-06T10:09:25.127Z] =================================================================================================================== 00:07:52.189 [2024-12-06T10:09:25.127Z] Total : 8360.11 32.66 0.00 0.00 15259.53 4408.79 22878.02 00:07:52.189 11989.00 IOPS, 46.83 MiB/s 00:07:52.189 Latency(us) 00:07:52.189 [2024-12-06T10:09:25.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.189 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:52.189 Nvme1n1 : 1.01 12034.50 47.01 0.00 0.00 10597.99 6166.34 22878.02 00:07:52.189 [2024-12-06T10:09:25.127Z] =================================================================================================================== 00:07:52.189 [2024-12-06T10:09:25.127Z] Total : 12034.50 47.01 0.00 0.00 10597.99 6166.34 22878.02 00:07:52.189 7924.00 IOPS, 30.95 MiB/s 00:07:52.189 Latency(us) 00:07:52.189 [2024-12-06T10:09:25.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.189 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, 
IO size: 4096) 00:07:52.189 Nvme1n1 : 1.00 8041.58 31.41 0.00 0.00 15891.47 2174.60 38130.04 00:07:52.189 [2024-12-06T10:09:25.127Z] =================================================================================================================== 00:07:52.189 [2024-12-06T10:09:25.127Z] Total : 8041.58 31.41 0.00 0.00 15891.47 2174.60 38130.04 00:07:52.189 11:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1571354 00:07:52.189 11:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1571357 00:07:52.189 265096.00 IOPS, 1035.53 MiB/s 00:07:52.189 Latency(us) 00:07:52.189 [2024-12-06T10:09:25.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.189 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:52.189 Nvme1n1 : 1.00 264729.67 1034.10 0.00 0.00 480.91 202.01 1385.19 00:07:52.189 [2024-12-06T10:09:25.127Z] =================================================================================================================== 00:07:52.189 [2024-12-06T10:09:25.127Z] Total : 264729.67 1034.10 0.00 0.00 480.91 202.01 1385.19 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1571361 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.447 rmmod nvme_tcp 00:07:52.447 rmmod nvme_fabrics 00:07:52.447 rmmod nvme_keyring 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1571239 ']' 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1571239 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1571239 ']' 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1571239 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1571239 00:07:52.447 11:09:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1571239' 00:07:52.447 killing process with pid 1571239 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1571239 00:07:52.447 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1571239 00:07:52.705 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.705 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.705 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.705 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.706 11:09:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.706 11:09:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.612 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.612 00:07:54.612 real 0m11.615s 00:07:54.612 user 0m19.463s 00:07:54.612 sys 0m6.148s 00:07:54.612 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.612 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.612 ************************************ 00:07:54.612 END TEST nvmf_bdev_io_wait 00:07:54.612 ************************************ 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.873 ************************************ 00:07:54.873 START TEST nvmf_queue_depth 00:07:54.873 ************************************ 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.873 * Looking for test storage... 
00:07:54.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:54.873 
11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.873 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:54.873 --rc genhtml_branch_coverage=1 00:07:54.873 --rc genhtml_function_coverage=1 00:07:54.873 --rc genhtml_legend=1 00:07:54.873 --rc geninfo_all_blocks=1 00:07:54.873 --rc geninfo_unexecuted_blocks=1 00:07:54.873 00:07:54.873 ' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.873 --rc genhtml_branch_coverage=1 00:07:54.873 --rc genhtml_function_coverage=1 00:07:54.873 --rc genhtml_legend=1 00:07:54.873 --rc geninfo_all_blocks=1 00:07:54.873 --rc geninfo_unexecuted_blocks=1 00:07:54.873 00:07:54.873 ' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.873 --rc genhtml_branch_coverage=1 00:07:54.873 --rc genhtml_function_coverage=1 00:07:54.873 --rc genhtml_legend=1 00:07:54.873 --rc geninfo_all_blocks=1 00:07:54.873 --rc geninfo_unexecuted_blocks=1 00:07:54.873 00:07:54.873 ' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.873 --rc genhtml_branch_coverage=1 00:07:54.873 --rc genhtml_function_coverage=1 00:07:54.873 --rc genhtml_legend=1 00:07:54.873 --rc geninfo_all_blocks=1 00:07:54.873 --rc geninfo_unexecuted_blocks=1 00:07:54.873 00:07:54.873 ' 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.873 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.874 11:09:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.874 11:09:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.874 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.874 11:09:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.133 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.133 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.133 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.133 11:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.827 11:09:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:01.827 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:01.827 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:01.827 Found net devices under 0000:af:00.0: cvl_0_0 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:01.827 Found net devices under 0000:af:00.1: cvl_0_1 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.827 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.828 
11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:01.828 00:08:01.828 --- 10.0.0.2 ping statistics --- 00:08:01.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.828 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:01.828 00:08:01.828 --- 10.0.0.1 ping statistics --- 00:08:01.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.828 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1575527 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1575527 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1575527 ']' 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.828 11:09:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.828 [2024-12-06 11:09:33.906461] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:08:01.828 [2024-12-06 11:09:33.906500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.828 [2024-12-06 11:09:33.983702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.828 [2024-12-06 11:09:34.019448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.828 [2024-12-06 11:09:34.019481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:01.828 [2024-12-06 11:09:34.019487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.828 [2024-12-06 11:09:34.019492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.828 [2024-12-06 11:09:34.019497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.828 [2024-12-06 11:09:34.020038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.828 [2024-12-06 11:09:34.756096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.828 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.088 Malloc0 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.088 [2024-12-06 11:09:34.806093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.088 11:09:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1575591 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1575591 /var/tmp/bdevperf.sock 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1575591 ']' 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.088 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.088 [2024-12-06 11:09:34.856542] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:08:02.088 [2024-12-06 11:09:34.856581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575591 ] 00:08:02.088 [2024-12-06 11:09:34.926498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.088 [2024-12-06 11:09:34.967153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.346 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.347 NVMe0n1 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.347 11:09:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:02.347 Running I/O for 10 seconds... 
00:08:04.396 12801.00 IOPS, 50.00 MiB/s [2024-12-06T10:09:38.712Z] 13251.00 IOPS, 51.76 MiB/s [2024-12-06T10:09:39.647Z] 13308.33 IOPS, 51.99 MiB/s [2024-12-06T10:09:40.581Z] 13439.75 IOPS, 52.50 MiB/s [2024-12-06T10:09:41.538Z] 13485.40 IOPS, 52.68 MiB/s [2024-12-06T10:09:42.476Z] 13478.00 IOPS, 52.65 MiB/s [2024-12-06T10:09:43.414Z] 13570.00 IOPS, 53.01 MiB/s [2024-12-06T10:09:44.352Z] 13564.12 IOPS, 52.98 MiB/s [2024-12-06T10:09:45.739Z] 13594.56 IOPS, 53.10 MiB/s [2024-12-06T10:09:45.739Z] 13601.00 IOPS, 53.13 MiB/s 00:08:12.801 Latency(us) 00:08:12.801 [2024-12-06T10:09:45.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:12.801 Verification LBA range: start 0x0 length 0x4000 00:08:12.801 NVMe0n1 : 10.05 13632.15 53.25 0.00 0.00 74897.91 16086.11 48377.48 00:08:12.801 [2024-12-06T10:09:45.739Z] =================================================================================================================== 00:08:12.801 [2024-12-06T10:09:45.739Z] Total : 13632.15 53.25 0.00 0.00 74897.91 16086.11 48377.48 00:08:12.801 { 00:08:12.801 "results": [ 00:08:12.801 { 00:08:12.801 "job": "NVMe0n1", 00:08:12.801 "core_mask": "0x1", 00:08:12.801 "workload": "verify", 00:08:12.801 "status": "finished", 00:08:12.801 "verify_range": { 00:08:12.801 "start": 0, 00:08:12.801 "length": 16384 00:08:12.801 }, 00:08:12.801 "queue_depth": 1024, 00:08:12.801 "io_size": 4096, 00:08:12.801 "runtime": 10.052269, 00:08:12.801 "iops": 13632.146135365061, 00:08:12.801 "mibps": 53.25057084126977, 00:08:12.801 "io_failed": 0, 00:08:12.801 "io_timeout": 0, 00:08:12.801 "avg_latency_us": 74897.911688433, 00:08:12.801 "min_latency_us": 16086.10909090909, 00:08:12.801 "max_latency_us": 48377.483636363635 00:08:12.801 } 00:08:12.801 ], 00:08:12.801 "core_count": 1 00:08:12.801 } 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1575591 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1575591 ']' 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1575591 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575591 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575591' 00:08:12.801 killing process with pid 1575591 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1575591 00:08:12.801 Received shutdown signal, test time was about 10.000000 seconds 00:08:12.801 00:08:12.801 Latency(us) 00:08:12.801 [2024-12-06T10:09:45.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.801 [2024-12-06T10:09:45.739Z] =================================================================================================================== 00:08:12.801 [2024-12-06T10:09:45.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1575591 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.801 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.802 rmmod nvme_tcp 00:08:12.802 rmmod nvme_fabrics 00:08:12.802 rmmod nvme_keyring 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1575527 ']' 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1575527 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1575527 ']' 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1575527 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575527 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575527' 00:08:12.802 killing process with pid 1575527 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1575527 00:08:12.802 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1575527 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.059 11:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.590 11:09:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.590 00:08:15.590 real 0m20.361s 00:08:15.590 user 0m23.688s 00:08:15.590 sys 0m6.179s 00:08:15.590 11:09:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.590 11:09:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.590 ************************************ 00:08:15.590 END TEST nvmf_queue_depth 00:08:15.590 ************************************ 00:08:15.590 11:09:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.590 11:09:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.590 11:09:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.591 11:09:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.591 ************************************ 00:08:15.591 START TEST nvmf_target_multipath 00:08:15.591 ************************************ 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.591 * Looking for test storage... 
00:08:15.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.591 11:09:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.591 --rc genhtml_branch_coverage=1 00:08:15.591 --rc genhtml_function_coverage=1 00:08:15.591 --rc genhtml_legend=1 00:08:15.591 --rc geninfo_all_blocks=1 00:08:15.591 --rc geninfo_unexecuted_blocks=1 00:08:15.591 00:08:15.591 ' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.591 --rc genhtml_branch_coverage=1 00:08:15.591 --rc genhtml_function_coverage=1 00:08:15.591 --rc genhtml_legend=1 00:08:15.591 --rc geninfo_all_blocks=1 00:08:15.591 --rc geninfo_unexecuted_blocks=1 00:08:15.591 00:08:15.591 ' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.591 --rc genhtml_branch_coverage=1 00:08:15.591 --rc genhtml_function_coverage=1 00:08:15.591 --rc genhtml_legend=1 00:08:15.591 --rc geninfo_all_blocks=1 00:08:15.591 --rc geninfo_unexecuted_blocks=1 00:08:15.591 00:08:15.591 ' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.591 --rc genhtml_branch_coverage=1 00:08:15.591 --rc genhtml_function_coverage=1 00:08:15.591 --rc genhtml_legend=1 00:08:15.591 --rc geninfo_all_blocks=1 00:08:15.591 --rc geninfo_unexecuted_blocks=1 00:08:15.591 00:08:15.591 ' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.591 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.592 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:22.162 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.162 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:22.163 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
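The pci_bus_cache lookups traced above classify NICs by "vendor:device" ID into family arrays (e810/x722/mlx). A minimal standalone sketch of that idiom follows; the cache contents here are hypothetical stand-ins for a real lspci scan, not this host's devices.

```shell
#!/usr/bin/env bash
# Sketch of the pci_bus_cache idiom: an associative array keyed by
# "vendor:device" maps to PCI addresses, and family arrays collect matches.
declare -A pci_bus_cache
intel=0x8086 mellanox=0x15b3

# Hypothetical cache contents (stand-ins, not real hardware).
pci_bus_cache["$intel:0x159b"]="0000:af:00.0 0000:af:00.1"   # two E810 ports
pci_bus_cache["$mellanox:0x1017"]="0000:5e:00.0"             # one mlx device

e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})      # unset key expands to nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})      # word-splits into two entries
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

echo "e810: ${e810[*]} (count ${#e810[@]})"
echo "mlx:  ${mlx[*]} (count ${#mlx[@]})"
```

The unquoted expansions are deliberate: an absent key contributes zero array elements, and a space-separated value contributes one element per PCI address.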
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:08:22.163 Found net devices under 0000:af:00.0: cvl_0_0
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:08:22.163 Found net devices under 0000:af:00.1: cvl_0_1
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
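The entries that follow move the target-side NIC into a private network namespace and address both ends. A dry-run sketch of that plumbing is below: it only prints the ip/iptables commands instead of executing them (the real helper in nvmf/common.sh runs them with root). Interface and namespace names mirror the log; this is an illustration, not the actual helper.

```shell
#!/usr/bin/env bash
# Dry-run sketch: assemble and print the namespace-plumbing commands.
NS=cvl_0_0_ns_spdk          # namespace that will hold the target-side NIC
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

setup_cmds() {
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"
    echo "ip addr add $INI_IP/24 dev $INI_IF"
    echo "ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    echo "ip netns exec $NS ip link set lo up"
    echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
}
setup_cmds
```

Isolating the target NIC in its own namespace lets target (10.0.0.2) and initiator (10.0.0.1) traffic traverse the physical link on one host, which is why the log then pings each side from the other.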
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:22.163 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:22.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:22.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms
00:08:22.163
00:08:22.163 --- 10.0.0.2 ping statistics ---
00:08:22.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:22.163 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms
00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:22.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:22.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:22.163 00:08:22.163 --- 10.0.0.1 ping statistics --- 00:08:22.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.163 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:22.163 only one NIC for nvmf test 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:22.163 11:09:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.163 rmmod nvme_tcp 00:08:22.163 rmmod nvme_fabrics 00:08:22.163 rmmod nvme_keyring 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.163 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.539 00:08:23.539 real 0m8.439s 00:08:23.539 user 0m1.840s 00:08:23.539 sys 0m4.600s 00:08:23.539 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.540 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:23.540 ************************************ 00:08:23.540 END TEST nvmf_target_multipath 00:08:23.540 ************************************ 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.799 ************************************ 00:08:23.799 START TEST nvmf_zcopy 00:08:23.799 ************************************ 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:23.799 * Looking for test storage... 00:08:23.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
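The scripts/common.sh trace bracketing this point is cmp_versions evaluating "lt 1.15 2" for the lcov version check: split both versions into fields, then compare numerically field by field. A simplified sketch of that comparison follows, assuming purely numeric, dot-separated versions (the real helper also splits on '-' and ':').

```shell
#!/usr/bin/env bash
# Simplified sketch of the field-by-field version comparison.
version_lt() {
    local IFS=. i x y
    local -a a b
    read -ra a <<< "$1"          # split on '.' into numeric fields
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                     # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Padding missing fields with 0 is what makes "1.15 < 2" come out true: the first fields already decide it (1 < 2), and shorter versions like "2" never under-compare against longer ones like "2.0.1".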
00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.799 11:09:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.799 --rc genhtml_branch_coverage=1 00:08:23.799 --rc genhtml_function_coverage=1 00:08:23.799 --rc genhtml_legend=1 00:08:23.799 --rc geninfo_all_blocks=1 00:08:23.799 --rc geninfo_unexecuted_blocks=1 00:08:23.799 00:08:23.799 ' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.799 --rc genhtml_branch_coverage=1 00:08:23.799 --rc genhtml_function_coverage=1 00:08:23.799 --rc genhtml_legend=1 00:08:23.799 --rc geninfo_all_blocks=1 00:08:23.799 --rc geninfo_unexecuted_blocks=1 00:08:23.799 00:08:23.799 ' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.799 --rc genhtml_branch_coverage=1 00:08:23.799 --rc genhtml_function_coverage=1 00:08:23.799 --rc genhtml_legend=1 00:08:23.799 --rc geninfo_all_blocks=1 00:08:23.799 --rc geninfo_unexecuted_blocks=1 00:08:23.799 00:08:23.799 ' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.799 --rc genhtml_branch_coverage=1 00:08:23.799 --rc 
genhtml_function_coverage=1 00:08:23.799 --rc genhtml_legend=1 00:08:23.799 --rc geninfo_all_blocks=1 00:08:23.799 --rc geninfo_unexecuted_blocks=1 00:08:23.799 00:08:23.799 ' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.799 11:09:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.799 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.058 11:09:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.058 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.059 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.059 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.059 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.059 11:09:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.630 11:10:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:30.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:30.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:30.630 Found net devices under 0000:af:00.0: cvl_0_0 00:08:30.630 11:10:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:30.630 Found net devices under 0000:af:00.1: cvl_0_1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.630 11:10:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.630 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:08:30.630 00:08:30.630 --- 10.0.0.2 ping statistics --- 00:08:30.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.631 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:30.631 00:08:30.631 --- 10.0.0.1 ping statistics --- 00:08:30.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.631 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1584908 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1584908 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
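The `nvmf_tcp_init` steps traced above (common.sh@271–291) move the target NIC into a namespace, address both ends, open the NVMe/TCP port, and verify reachability. A condensed sketch under the same names and addresses the log uses (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2); `setup_nvmf_netns` and the `DRY_RUN` switch are illustrative additions so the sequence can be inspected without root:

```shell
# Sketch of the namespace wiring performed by nvmf_tcp_init.
# Set DRY_RUN=echo to print the commands instead of executing them.
setup_nvmf_netns() {
    local run=${DRY_RUN:-}
    $run ip netns add cvl_0_0_ns_spdk
    $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    $run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                                 # reachability check, as in the log
}

DRY_RUN=echo
setup_nvmf_netns
```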
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1584908 ']' 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.631 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.631 [2024-12-06 11:10:02.850139] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:08:30.631 [2024-12-06 11:10:02.850180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.631 [2024-12-06 11:10:02.928235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.631 [2024-12-06 11:10:02.966198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.631 [2024-12-06 11:10:02.966226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:30.631 [2024-12-06 11:10:02.966233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.631 [2024-12-06 11:10:02.966241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.631 [2024-12-06 11:10:02.966246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.631 [2024-12-06 11:10:02.966789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.890 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 [2024-12-06 11:10:03.711254] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 [2024-12-06 11:10:03.731457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 malloc0 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.891 { 00:08:30.891 "params": { 00:08:30.891 "name": "Nvme$subsystem", 00:08:30.891 "trtype": "$TEST_TRANSPORT", 00:08:30.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.891 "adrfam": "ipv4", 00:08:30.891 "trsvcid": "$NVMF_PORT", 00:08:30.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.891 "hdgst": ${hdgst:-false}, 00:08:30.891 "ddgst": ${ddgst:-false} 00:08:30.891 }, 00:08:30.891 "method": "bdev_nvme_attach_controller" 00:08:30.891 } 00:08:30.891 EOF 00:08:30.891 )") 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:30.891 11:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.891 "params": { 00:08:30.891 "name": "Nvme1", 00:08:30.891 "trtype": "tcp", 00:08:30.891 "traddr": "10.0.0.2", 00:08:30.891 "adrfam": "ipv4", 00:08:30.891 "trsvcid": "4420", 00:08:30.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.891 "hdgst": false, 00:08:30.891 "ddgst": false 00:08:30.891 }, 00:08:30.891 "method": "bdev_nvme_attach_controller" 00:08:30.891 }' 00:08:30.891 [2024-12-06 11:10:03.810015] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:08:30.891 [2024-12-06 11:10:03.810054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585188 ] 00:08:31.150 [2024-12-06 11:10:03.882975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.150 [2024-12-06 11:10:03.920973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.409 Running I/O for 10 seconds... 
00:08:33.280 9428.00 IOPS, 73.66 MiB/s [2024-12-06T10:10:07.155Z] 9507.00 IOPS, 74.27 MiB/s [2024-12-06T10:10:08.533Z] 9527.00 IOPS, 74.43 MiB/s [2024-12-06T10:10:09.469Z] 9543.75 IOPS, 74.56 MiB/s [2024-12-06T10:10:10.405Z] 9555.60 IOPS, 74.65 MiB/s [2024-12-06T10:10:11.358Z] 9557.00 IOPS, 74.66 MiB/s [2024-12-06T10:10:12.294Z] 9548.86 IOPS, 74.60 MiB/s [2024-12-06T10:10:13.233Z] 9562.38 IOPS, 74.71 MiB/s [2024-12-06T10:10:14.171Z] 9566.44 IOPS, 74.74 MiB/s [2024-12-06T10:10:14.171Z] 9574.30 IOPS, 74.80 MiB/s 00:08:41.233 Latency(us) 00:08:41.233 [2024-12-06T10:10:14.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.233 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:41.233 Verification LBA range: start 0x0 length 0x1000 00:08:41.233 Nvme1n1 : 10.01 9577.72 74.83 0.00 0.00 13326.95 2040.55 22282.24 00:08:41.233 [2024-12-06T10:10:14.171Z] =================================================================================================================== 00:08:41.233 [2024-12-06T10:10:14.171Z] Total : 9577.72 74.83 0.00 0.00 13326.95 2040.55 22282.24 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1587025 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.493 11:10:14 
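The MiB/s column in the bdevperf table above follows directly from the 8 KiB I/O size (`-o 8192`): MiB/s = IOPS × 8192 / 2^20 = IOPS / 128. A quick awk check against the Nvme1n1 row (`iops_to_mibs` is just an illustrative name):

```shell
# Convert an IOPS figure to MiB/s for a given I/O size in bytes.
iops_to_mibs() {
    awk -v iops="$1" -v io="$2" 'BEGIN { printf "%.2f\n", iops * io / (1024 * 1024) }'
}

iops_to_mibs 9577.72 8192    # prints 74.83, matching the table
```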
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.493 { 00:08:41.493 "params": { 00:08:41.493 "name": "Nvme$subsystem", 00:08:41.493 "trtype": "$TEST_TRANSPORT", 00:08:41.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.493 "adrfam": "ipv4", 00:08:41.493 "trsvcid": "$NVMF_PORT", 00:08:41.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.493 "hdgst": ${hdgst:-false}, 00:08:41.493 "ddgst": ${ddgst:-false} 00:08:41.493 }, 00:08:41.493 "method": "bdev_nvme_attach_controller" 00:08:41.493 } 00:08:41.493 EOF 00:08:41.493 )") 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:41.493 [2024-12-06 11:10:14.316177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.493 [2024-12-06 11:10:14.316210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:41.493 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.493 "params": { 00:08:41.493 "name": "Nvme1", 00:08:41.493 "trtype": "tcp", 00:08:41.493 "traddr": "10.0.0.2", 00:08:41.493 "adrfam": "ipv4", 00:08:41.493 "trsvcid": "4420", 00:08:41.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.493 "hdgst": false, 00:08:41.493 "ddgst": false 00:08:41.493 }, 00:08:41.493 "method": "bdev_nvme_attach_controller" 00:08:41.493 }' 00:08:41.493 [2024-12-06 11:10:14.328173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.493 [2024-12-06 11:10:14.328186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.493 [2024-12-06 11:10:14.340202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.493 [2024-12-06 11:10:14.340212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.493 [2024-12-06 11:10:14.352234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.493 [2024-12-06 11:10:14.352243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.493 [2024-12-06 11:10:14.357335] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:08:41.493 [2024-12-06 11:10:14.357376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587025 ]
00:08:41.493 [2024-12-06 11:10:14.364266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.493 [2024-12-06 11:10:14.364277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.493 [2024-12-06 11:10:14.430327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.754 [2024-12-06 11:10:14.469361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.014 Running I/O for 5 seconds...
[... the error pair above (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused "Unable to add namespace") repeats throughout the run, from 2024-12-06 11:10:14.376298 through 11:10:16.596146 ...]
00:08:42.792 18296.00 IOPS, 142.94 MiB/s [2024-12-06T10:10:15.731Z]
[2024-12-06 11:10:16.596164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.826 [2024-12-06 11:10:16.609467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.609484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.622470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.622488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.635761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.635779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.648272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.648290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.661253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.661270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.674280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.674297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.688014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.688033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.700653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.700671] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 18356.00 IOPS, 143.41 MiB/s [2024-12-06T10:10:16.765Z] [2024-12-06 11:10:16.713415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.713433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.726607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.726625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.740376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.740395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.827 [2024-12-06 11:10:16.753403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.827 [2024-12-06 11:10:16.753423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.766643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.766662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.779415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.779433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.792099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.792117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.804954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.804972] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.818469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.818487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.832179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.832197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.845824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.845841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.858922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.858941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.871916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.871934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.885620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.885637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.898436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.898453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.911258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.911275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.084 [2024-12-06 11:10:16.923850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.923868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.937400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.937418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.950710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.950728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.964243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.964260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.977984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.978002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:16.991545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:16.991563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:17.004427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:17.004445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.084 [2024-12-06 11:10:17.018130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.084 [2024-12-06 11:10:17.018148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.031657] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.031675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.045389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.045406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.058585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.058602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.071781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.071798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.084708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.084726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.097786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.097803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.110634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.110652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.123953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.123970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.137138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.137155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.150654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.150672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.163375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.163393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.176869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.176887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.190175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.190193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.203408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.203426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.216355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.216372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.229502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.229519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.242138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 
[2024-12-06 11:10:17.242156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.254933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.254951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.341 [2024-12-06 11:10:17.268160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.341 [2024-12-06 11:10:17.268178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.281347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.281366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.295110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.295128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.307754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.307772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.320892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.320911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.333823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.333841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.347403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.347421] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.361162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.361179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.374121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.374139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.387670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.387687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.401030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.401047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.415014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.415032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.427925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.427943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.440647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.440669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.453400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.453417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.599 [2024-12-06 11:10:17.466364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.466382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.479779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.479796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.493373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.493390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.506887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.506904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.519937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.519955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.599 [2024-12-06 11:10:17.532743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.599 [2024-12-06 11:10:17.532761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.546518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.546537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.559229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.559257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.571766] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.571784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.584958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.584976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.598445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.598463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.610986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.611005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.623936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.623955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.637480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.637498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.651071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.651090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.665182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.665202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.678794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.678814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.692235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.692257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.705592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.705609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 18334.00 IOPS, 143.23 MiB/s [2024-12-06T10:10:17.795Z] [2024-12-06 11:10:17.719524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.857 [2024-12-06 11:10:17.719542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.857 [2024-12-06 11:10:17.732369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.858 [2024-12-06 11:10:17.732386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.858 [2024-12-06 11:10:17.746571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.858 [2024-12-06 11:10:17.746588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.858 [2024-12-06 11:10:17.759646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.858 [2024-12-06 11:10:17.759665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.858 [2024-12-06 11:10:17.773304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.858 [2024-12-06 11:10:17.773323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.858 [2024-12-06 11:10:17.786366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.858 [2024-12-06 11:10:17.786384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.116 [2024-12-06 11:10:17.800258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.116 [2024-12-06 11:10:17.800277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.116 [2024-12-06 11:10:17.813014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.116 [2024-12-06 11:10:17.813032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.116 [2024-12-06 11:10:17.826972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.116 [2024-12-06 11:10:17.826990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.116 [2024-12-06 11:10:17.840740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.116 [2024-12-06 11:10:17.840758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.116 [2024-12-06 11:10:17.854206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.854224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.867868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.867886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.880915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.880933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.894015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 
[2024-12-06 11:10:17.894033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.907108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.907125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.920533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.920551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.933984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.934003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.947259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.947282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.960180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.960199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.972923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.972940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:17.986959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:17.986977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:18.000449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:18.000467] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:18.013371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:18.013389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:18.026414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:18.026432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:18.040157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:18.040174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.117 [2024-12-06 11:10:18.053780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.117 [2024-12-06 11:10:18.053798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.375 [2024-12-06 11:10:18.067607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.375 [2024-12-06 11:10:18.067625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.375 [2024-12-06 11:10:18.080961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.375 [2024-12-06 11:10:18.080979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.375 [2024-12-06 11:10:18.094413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.375 [2024-12-06 11:10:18.094430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.375 [2024-12-06 11:10:18.107484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.375 [2024-12-06 11:10:18.107502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:45.375 [2024-12-06 11:10:18.120686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.375 [2024-12-06 11:10:18.120704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at roughly 13 ms intervals from 11:10:18.134361 through 11:10:18.705844 ...]
00:08:45.889 18339.50 IOPS, 143.28 MiB/s [2024-12-06T10:10:18.827Z]
[... the error pair continues repeating at roughly 13 ms intervals from 11:10:18.719119 through 11:10:19.719698 ...]
00:08:46.921 18349.60 IOPS, 143.36 MiB/s 00:08:46.921 Latency(us) 00:08:46.921 [2024-12-06T10:10:19.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.921 Job:
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:46.921 Nvme1n1 : 5.01 18351.17 143.37 0.00 0.00 6968.73 2934.23 14656.23 00:08:46.921 [2024-12-06T10:10:19.859Z] =================================================================================================================== 00:08:46.921 [2024-12-06T10:10:19.859Z] Total : 18351.17 143.37 0.00 0.00 6968.73 2934.23 14656.23
[... the "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at roughly 12 ms intervals from 11:10:19.729397 through 11:10:19.885828 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1587025) - No such process 00:08:47.179 11:10:19
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1587025 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 delay0 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:47.179 [2024-12-06 11:10:20.035994] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 
00:08:53.737 Initializing NVMe Controllers 00:08:53.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:53.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:53.737 Initialization complete. Launching workers. 00:08:53.737 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 16880 00:08:53.737 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17039, failed to submit 104 00:08:53.737 success 16957, unsuccessful 82, failed 0 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.737 rmmod nvme_tcp 00:08:53.737 rmmod nvme_fabrics 00:08:53.737 rmmod nvme_keyring 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1584908 ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1584908 
00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1584908 ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1584908 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1584908 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1584908' 00:08:53.737 killing process with pid 1584908 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1584908 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1584908 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.737 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.643 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.643 00:08:55.643 real 0m32.017s 00:08:55.643 user 0m42.008s 00:08:55.643 sys 0m11.777s 00:08:55.643 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.643 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.643 ************************************ 00:08:55.643 END TEST nvmf_zcopy 00:08:55.643 ************************************ 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.903 ************************************ 00:08:55.903 START TEST nvmf_nmic 00:08:55.903 ************************************ 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:55.903 * Looking for test storage... 
00:08:55.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.903 11:10:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.903 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.903 --rc genhtml_branch_coverage=1 00:08:55.903 --rc genhtml_function_coverage=1 00:08:55.904 --rc genhtml_legend=1 00:08:55.904 --rc geninfo_all_blocks=1 00:08:55.904 --rc geninfo_unexecuted_blocks=1 
00:08:55.904 00:08:55.904 ' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.904 --rc genhtml_branch_coverage=1 00:08:55.904 --rc genhtml_function_coverage=1 00:08:55.904 --rc genhtml_legend=1 00:08:55.904 --rc geninfo_all_blocks=1 00:08:55.904 --rc geninfo_unexecuted_blocks=1 00:08:55.904 00:08:55.904 ' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.904 --rc genhtml_branch_coverage=1 00:08:55.904 --rc genhtml_function_coverage=1 00:08:55.904 --rc genhtml_legend=1 00:08:55.904 --rc geninfo_all_blocks=1 00:08:55.904 --rc geninfo_unexecuted_blocks=1 00:08:55.904 00:08:55.904 ' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.904 --rc genhtml_branch_coverage=1 00:08:55.904 --rc genhtml_function_coverage=1 00:08:55.904 --rc genhtml_legend=1 00:08:55.904 --rc geninfo_all_blocks=1 00:08:55.904 --rc geninfo_unexecuted_blocks=1 00:08:55.904 00:08:55.904 ' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.904 11:10:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.904 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.164 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.164 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.164 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.164 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.164 
11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.164 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.732 11:10:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:02.732 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:02.732 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:02.732 Found net devices under 0000:af:00.0: cvl_0_0 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:02.732 Found net devices under 0000:af:00.1: cvl_0_1 00:09:02.732 
11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.732 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:09:02.733 00:09:02.733 --- 10.0.0.2 ping statistics --- 00:09:02.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.733 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:02.733 00:09:02.733 --- 10.0.0.1 ping statistics --- 00:09:02.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.733 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1592863 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1592863 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1592863 ']' 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.733 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.733 [2024-12-06 11:10:34.954264] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:09:02.733 [2024-12-06 11:10:34.954326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.733 [2024-12-06 11:10:35.031264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.733 [2024-12-06 11:10:35.072834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.733 [2024-12-06 11:10:35.072871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.733 [2024-12-06 11:10:35.072877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.733 [2024-12-06 11:10:35.072883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.733 [2024-12-06 11:10:35.072887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.733 [2024-12-06 11:10:35.074301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.733 [2024-12-06 11:10:35.074411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.733 [2024-12-06 11:10:35.074526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.733 [2024-12-06 11:10:35.074526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 [2024-12-06 11:10:35.821089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.992 
11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 Malloc0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 [2024-12-06 11:10:35.880435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:02.992 test case1: single bdev can't be used in multiple subsystems 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 [2024-12-06 11:10:35.908353] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:02.992 [2024-12-06 
11:10:35.908372] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:02.992 [2024-12-06 11:10:35.908378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.992 request: 00:09:02.992 { 00:09:02.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:02.992 "namespace": { 00:09:02.992 "bdev_name": "Malloc0", 00:09:02.992 "no_auto_visible": false, 00:09:02.992 "hide_metadata": false 00:09:02.992 }, 00:09:02.992 "method": "nvmf_subsystem_add_ns", 00:09:02.992 "req_id": 1 00:09:02.992 } 00:09:02.992 Got JSON-RPC error response 00:09:02.992 response: 00:09:02.992 { 00:09:02.992 "code": -32602, 00:09:02.992 "message": "Invalid parameters" 00:09:02.992 } 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:02.992 Adding namespace failed - expected result. 
00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:02.992 test case2: host connect to nvmf target in multiple paths 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 [2024-12-06 11:10:35.920483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.992 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.411 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:05.790 11:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.790 11:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:05.790 11:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.790 11:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:05.790 11:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:07.695 11:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.695 [global] 00:09:07.695 thread=1 00:09:07.695 invalidate=1 00:09:07.695 rw=write 00:09:07.695 time_based=1 00:09:07.695 runtime=1 00:09:07.695 ioengine=libaio 00:09:07.695 direct=1 00:09:07.695 bs=4096 00:09:07.695 iodepth=1 00:09:07.695 norandommap=0 00:09:07.695 numjobs=1 00:09:07.695 00:09:07.695 verify_dump=1 00:09:07.695 verify_backlog=512 00:09:07.695 verify_state_save=0 00:09:07.695 do_verify=1 00:09:07.695 verify=crc32c-intel 00:09:07.695 [job0] 00:09:07.695 filename=/dev/nvme0n1 00:09:07.695 Could not set queue depth (nvme0n1) 00:09:08.260 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.260 fio-3.35 00:09:08.260 Starting 1 thread 00:09:09.195 00:09:09.195 job0: (groupid=0, jobs=1): err= 0: pid=1594093: Fri Dec 6 11:10:42 2024 00:09:09.195 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:09:09.195 slat (nsec): min=9744, max=25565, avg=21875.43, stdev=2821.43 00:09:09.195 clat (usec): min=40759, max=41094, avg=40956.06, stdev=69.33 00:09:09.195 lat (usec): min=40769, max=41116, 
avg=40977.94, stdev=70.85 00:09:09.195 clat percentiles (usec): 00:09:09.195 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:09.195 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:09.195 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.195 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.195 | 99.99th=[41157] 00:09:09.195 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:09.195 slat (nsec): min=10077, max=39797, avg=11276.58, stdev=2049.02 00:09:09.195 clat (usec): min=120, max=309, avg=137.85, stdev=19.45 00:09:09.195 lat (usec): min=130, max=349, avg=149.12, stdev=20.36 00:09:09.195 clat percentiles (usec): 00:09:09.195 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 126], 00:09:09.195 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133], 00:09:09.195 | 70.00th=[ 135], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 178], 00:09:09.195 | 99.00th=[ 190], 99.50th=[ 212], 99.90th=[ 310], 99.95th=[ 310], 00:09:09.195 | 99.99th=[ 310] 00:09:09.195 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.195 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.195 lat (usec) : 250=95.51%, 500=0.19% 00:09:09.195 lat (msec) : 50=4.30% 00:09:09.195 cpu : usr=0.59%, sys=0.69%, ctx=535, majf=0, minf=1 00:09:09.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.195 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.195 00:09:09.195 Run status group 0 (all jobs): 00:09:09.195 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=92.0KiB (94.2kB), 
run=1021-1021msec 00:09:09.195 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:09:09.195 00:09:09.195 Disk stats (read/write): 00:09:09.195 nvme0n1: ios=70/512, merge=0/0, ticks=840/71, in_queue=911, util=91.28% 00:09:09.195 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.455 rmmod nvme_tcp 00:09:09.455 rmmod nvme_fabrics 00:09:09.455 rmmod nvme_keyring 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1592863 ']' 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1592863 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1592863 ']' 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1592863 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592863 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592863' 00:09:09.455 killing process with pid 1592863 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1592863 00:09:09.455 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1592863 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.714 11:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.254 00:09:12.254 real 0m15.968s 00:09:12.254 user 0m40.268s 00:09:12.254 sys 0m5.370s 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:12.254 ************************************ 00:09:12.254 END TEST nvmf_nmic 00:09:12.254 ************************************ 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.254 ************************************ 00:09:12.254 START TEST nvmf_fio_target 00:09:12.254 ************************************ 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:12.254 * Looking for test storage... 00:09:12.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.254 11:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.254 --rc genhtml_branch_coverage=1 00:09:12.254 --rc genhtml_function_coverage=1 00:09:12.254 --rc genhtml_legend=1 00:09:12.254 --rc geninfo_all_blocks=1 00:09:12.254 --rc geninfo_unexecuted_blocks=1 00:09:12.254 00:09:12.254 ' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.254 --rc genhtml_branch_coverage=1 00:09:12.254 --rc genhtml_function_coverage=1 00:09:12.254 --rc genhtml_legend=1 00:09:12.254 --rc geninfo_all_blocks=1 00:09:12.254 --rc geninfo_unexecuted_blocks=1 00:09:12.254 00:09:12.254 ' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.254 --rc genhtml_branch_coverage=1 00:09:12.254 --rc genhtml_function_coverage=1 00:09:12.254 --rc genhtml_legend=1 00:09:12.254 --rc geninfo_all_blocks=1 00:09:12.254 --rc geninfo_unexecuted_blocks=1 00:09:12.254 00:09:12.254 ' 00:09:12.254 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.254 --rc 
genhtml_branch_coverage=1 00:09:12.254 --rc genhtml_function_coverage=1 00:09:12.254 --rc genhtml_legend=1 00:09:12.254 --rc geninfo_all_blocks=1 00:09:12.255 --rc geninfo_unexecuted_blocks=1 00:09:12.255 00:09:12.255 ' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.255 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.828 11:10:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.828 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:18.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:18.829 11:10:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:18.829 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:18.829 Found net devices under 0000:af:00.0: cvl_0_0 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:18.829 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:09:18.829 00:09:18.829 --- 10.0.0.2 ping statistics --- 00:09:18.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.829 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:09:18.829 00:09:18.829 --- 10.0.0.1 ping statistics --- 00:09:18.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.829 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.829 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1598096 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1598096 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1598096 ']' 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.830 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.830 [2024-12-06 11:10:50.996960] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:09:18.830 [2024-12-06 11:10:50.996998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.830 [2024-12-06 11:10:51.073620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.830 [2024-12-06 11:10:51.111446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.830 [2024-12-06 11:10:51.111481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.830 [2024-12-06 11:10:51.111487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.830 [2024-12-06 11:10:51.111493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.830 [2024-12-06 11:10:51.111497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.830 [2024-12-06 11:10:51.112967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.830 [2024-12-06 11:10:51.113108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.830 [2024-12-06 11:10:51.113159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.830 [2024-12-06 11:10:51.113161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.089 11:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:19.089 [2024-12-06 11:10:52.016223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.347 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.347 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:19.347 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.605 11:10:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:19.606 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.901 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:19.901 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.901 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:19.901 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:20.159 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.418 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:20.418 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.677 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:20.677 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.936 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:20.936 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:20.936 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.194 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:21.194 11:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.453 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:21.453 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.453 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.712 [2024-12-06 11:10:54.531747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.712 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:21.975 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:22.235 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:23.613 11:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:25.523 11:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:25.523 [global] 00:09:25.523 thread=1 00:09:25.523 invalidate=1 00:09:25.523 rw=write 00:09:25.523 time_based=1 00:09:25.523 runtime=1 00:09:25.523 ioengine=libaio 00:09:25.523 direct=1 00:09:25.523 bs=4096 00:09:25.523 iodepth=1 00:09:25.523 norandommap=0 00:09:25.523 numjobs=1 00:09:25.523 00:09:25.523 
verify_dump=1 00:09:25.523 verify_backlog=512 00:09:25.523 verify_state_save=0 00:09:25.523 do_verify=1 00:09:25.523 verify=crc32c-intel 00:09:25.523 [job0] 00:09:25.523 filename=/dev/nvme0n1 00:09:25.523 [job1] 00:09:25.523 filename=/dev/nvme0n2 00:09:25.523 [job2] 00:09:25.523 filename=/dev/nvme0n3 00:09:25.523 [job3] 00:09:25.523 filename=/dev/nvme0n4 00:09:25.523 Could not set queue depth (nvme0n1) 00:09:25.523 Could not set queue depth (nvme0n2) 00:09:25.523 Could not set queue depth (nvme0n3) 00:09:25.523 Could not set queue depth (nvme0n4) 00:09:25.783 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.783 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.783 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.783 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.783 fio-3.35 00:09:25.783 Starting 4 threads 00:09:27.166 00:09:27.166 job0: (groupid=0, jobs=1): err= 0: pid=1599631: Fri Dec 6 11:10:59 2024 00:09:27.166 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:27.166 slat (nsec): min=2691, max=19837, avg=4889.97, stdev=2209.61 00:09:27.166 clat (usec): min=162, max=587, avg=219.39, stdev=42.96 00:09:27.166 lat (usec): min=165, max=591, avg=224.28, stdev=42.58 00:09:27.166 clat percentiles (usec): 00:09:27.166 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:09:27.166 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:09:27.166 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 269], 00:09:27.166 | 99.00th=[ 453], 99.50th=[ 506], 99.90th=[ 529], 99.95th=[ 537], 00:09:27.166 | 99.99th=[ 586] 00:09:27.166 write: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:09:27.166 slat (nsec): min=3636, max=37279, avg=6438.76, stdev=3018.85 
00:09:27.166 clat (usec): min=101, max=365, avg=138.22, stdev=17.12 00:09:27.166 lat (usec): min=107, max=394, avg=144.66, stdev=17.60 00:09:27.166 clat percentiles (usec): 00:09:27.166 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:09:27.166 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:09:27.166 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 169], 00:09:27.166 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 241], 99.95th=[ 260], 00:09:27.166 | 99.99th=[ 367] 00:09:27.166 bw ( KiB/s): min=12288, max=12288, per=49.41%, avg=12288.00, stdev= 0.00, samples=1 00:09:27.166 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:27.166 lat (usec) : 250=92.86%, 500=6.88%, 750=0.26% 00:09:27.166 cpu : usr=1.60%, sys=3.10%, ctx=5450, majf=0, minf=1 00:09:27.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.166 issued rwts: total=2560,2888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.166 job1: (groupid=0, jobs=1): err= 0: pid=1599632: Fri Dec 6 11:10:59 2024 00:09:27.166 read: IOPS=2110, BW=8444KiB/s (8646kB/s)(8452KiB/1001msec) 00:09:27.166 slat (nsec): min=7444, max=50228, avg=8662.74, stdev=1874.55 00:09:27.166 clat (usec): min=171, max=1062, avg=228.44, stdev=37.75 00:09:27.166 lat (usec): min=182, max=1072, avg=237.10, stdev=37.76 00:09:27.166 clat percentiles (usec): 00:09:27.166 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:09:27.166 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:09:27.166 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:09:27.167 | 99.00th=[ 281], 99.50th=[ 457], 99.90th=[ 766], 99.95th=[ 799], 00:09:27.167 | 99.99th=[ 1057] 00:09:27.167 write: 
IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:27.167 slat (nsec): min=10921, max=56664, avg=12396.29, stdev=1767.80 00:09:27.167 clat (usec): min=120, max=308, avg=175.49, stdev=38.46 00:09:27.167 lat (usec): min=133, max=365, avg=187.88, stdev=38.49 00:09:27.167 clat percentiles (usec): 00:09:27.167 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:27.167 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:09:27.167 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 255], 95.00th=[ 265], 00:09:27.167 | 99.00th=[ 277], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:09:27.167 | 99.99th=[ 310] 00:09:27.167 bw ( KiB/s): min=11208, max=11208, per=45.07%, avg=11208.00, stdev= 0.00, samples=1 00:09:27.167 iops : min= 2802, max= 2802, avg=2802.00, stdev= 0.00, samples=1 00:09:27.167 lat (usec) : 250=86.48%, 500=13.40%, 750=0.04%, 1000=0.06% 00:09:27.167 lat (msec) : 2=0.02% 00:09:27.167 cpu : usr=3.70%, sys=8.00%, ctx=4675, majf=0, minf=1 00:09:27.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 issued rwts: total=2113,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.167 job2: (groupid=0, jobs=1): err= 0: pid=1599633: Fri Dec 6 11:10:59 2024 00:09:27.167 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:09:27.167 slat (nsec): min=10164, max=37646, avg=23792.46, stdev=5496.54 00:09:27.167 clat (usec): min=327, max=41115, avg=39235.80, stdev=8289.15 00:09:27.167 lat (usec): min=351, max=41137, avg=39259.59, stdev=8289.14 00:09:27.167 clat percentiles (usec): 00:09:27.167 | 1.00th=[ 326], 5.00th=[40109], 10.00th=[40633], 20.00th=[41157], 00:09:27.167 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:09:27.167 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:27.167 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:27.167 | 99.99th=[41157] 00:09:27.167 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:09:27.167 slat (nsec): min=10971, max=42461, avg=12660.38, stdev=2050.19 00:09:27.167 clat (usec): min=140, max=336, avg=170.13, stdev=16.52 00:09:27.167 lat (usec): min=152, max=348, avg=182.79, stdev=16.93 00:09:27.167 clat percentiles (usec): 00:09:27.167 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:27.167 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:27.167 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 196], 00:09:27.167 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 338], 99.95th=[ 338], 00:09:27.167 | 99.99th=[ 338] 00:09:27.167 bw ( KiB/s): min= 4096, max= 4096, per=16.47%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.167 lat (usec) : 250=95.34%, 500=0.37% 00:09:27.167 lat (msec) : 50=4.29% 00:09:27.167 cpu : usr=0.19%, sys=1.15%, ctx=537, majf=0, minf=1 00:09:27.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.167 job3: (groupid=0, jobs=1): err= 0: pid=1599634: Fri Dec 6 11:10:59 2024 00:09:27.167 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:09:27.167 slat (nsec): min=12077, max=25806, avg=22171.77, stdev=2380.27 00:09:27.167 clat (usec): min=40838, max=41200, avg=40979.67, stdev=78.14 00:09:27.167 lat (usec): min=40860, max=41212, avg=41001.84, stdev=76.89 00:09:27.167 clat 
percentiles (usec): 00:09:27.167 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:27.167 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:27.167 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:27.167 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:27.167 | 99.99th=[41157] 00:09:27.167 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:27.167 slat (usec): min=10, max=854, avg=15.44, stdev=37.26 00:09:27.167 clat (usec): min=130, max=818, avg=206.68, stdev=50.47 00:09:27.167 lat (usec): min=143, max=1111, avg=222.12, stdev=63.99 00:09:27.167 clat percentiles (usec): 00:09:27.167 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 165], 00:09:27.167 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 208], 60.00th=[ 233], 00:09:27.167 | 70.00th=[ 237], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 245], 00:09:27.167 | 99.00th=[ 293], 99.50th=[ 494], 99.90th=[ 816], 99.95th=[ 816], 00:09:27.167 | 99.99th=[ 816] 00:09:27.167 bw ( KiB/s): min= 4096, max= 4096, per=16.47%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.167 lat (usec) : 250=92.32%, 500=3.18%, 750=0.19%, 1000=0.19% 00:09:27.167 lat (msec) : 50=4.12% 00:09:27.167 cpu : usr=0.39%, sys=1.08%, ctx=536, majf=0, minf=1 00:09:27.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.167 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.167 00:09:27.167 Run status group 0 (all jobs): 00:09:27.167 READ: bw=17.7MiB/s (18.6MB/s), 86.2KiB/s-9.99MiB/s (88.3kB/s-10.5MB/s), io=18.4MiB (19.3MB), run=1001-1041msec 00:09:27.167 
WRITE: bw=24.3MiB/s (25.5MB/s), 1967KiB/s-11.3MiB/s (2015kB/s-11.8MB/s), io=25.3MiB (26.5MB), run=1001-1041msec 00:09:27.167 00:09:27.167 Disk stats (read/write): 00:09:27.167 nvme0n1: ios=2163/2560, merge=0/0, ticks=508/352, in_queue=860, util=86.97% 00:09:27.167 nvme0n2: ios=1980/2048, merge=0/0, ticks=833/338, in_queue=1171, util=90.35% 00:09:27.167 nvme0n3: ios=76/512, merge=0/0, ticks=1219/84, in_queue=1303, util=93.12% 00:09:27.167 nvme0n4: ios=94/512, merge=0/0, ticks=831/101, in_queue=932, util=95.59% 00:09:27.167 11:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:27.167 [global] 00:09:27.167 thread=1 00:09:27.167 invalidate=1 00:09:27.167 rw=randwrite 00:09:27.167 time_based=1 00:09:27.167 runtime=1 00:09:27.167 ioengine=libaio 00:09:27.167 direct=1 00:09:27.167 bs=4096 00:09:27.167 iodepth=1 00:09:27.167 norandommap=0 00:09:27.167 numjobs=1 00:09:27.167 00:09:27.167 verify_dump=1 00:09:27.167 verify_backlog=512 00:09:27.167 verify_state_save=0 00:09:27.167 do_verify=1 00:09:27.167 verify=crc32c-intel 00:09:27.167 [job0] 00:09:27.167 filename=/dev/nvme0n1 00:09:27.167 [job1] 00:09:27.167 filename=/dev/nvme0n2 00:09:27.167 [job2] 00:09:27.167 filename=/dev/nvme0n3 00:09:27.167 [job3] 00:09:27.167 filename=/dev/nvme0n4 00:09:27.167 Could not set queue depth (nvme0n1) 00:09:27.167 Could not set queue depth (nvme0n2) 00:09:27.168 Could not set queue depth (nvme0n3) 00:09:27.168 Could not set queue depth (nvme0n4) 00:09:27.427 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.427 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.427 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.427 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.427 fio-3.35 00:09:27.427 Starting 4 threads 00:09:28.802 00:09:28.802 job0: (groupid=0, jobs=1): err= 0: pid=1600059: Fri Dec 6 11:11:01 2024 00:09:28.802 read: IOPS=2362, BW=9451KiB/s (9677kB/s)(9460KiB/1001msec) 00:09:28.802 slat (nsec): min=7123, max=42728, avg=8274.47, stdev=1298.19 00:09:28.802 clat (usec): min=179, max=1925, avg=228.05, stdev=52.76 00:09:28.802 lat (usec): min=187, max=1935, avg=236.32, stdev=52.80 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:28.802 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:09:28.802 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:09:28.802 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 562], 99.95th=[ 1827], 00:09:28.802 | 99.99th=[ 1926] 00:09:28.802 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:28.802 slat (nsec): min=10014, max=51517, avg=11122.57, stdev=1654.65 00:09:28.802 clat (usec): min=115, max=257, avg=155.60, stdev=18.21 00:09:28.802 lat (usec): min=126, max=269, avg=166.73, stdev=18.41 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:09:28.802 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:09:28.802 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 188], 00:09:28.802 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 247], 99.95th=[ 258], 00:09:28.802 | 99.99th=[ 258] 00:09:28.802 bw ( KiB/s): min=11536, max=11536, per=36.53%, avg=11536.00, stdev= 0.00, samples=1 00:09:28.802 iops : min= 2884, max= 2884, avg=2884.00, stdev= 0.00, samples=1 00:09:28.802 lat (usec) : 250=94.66%, 500=5.26%, 750=0.04% 00:09:28.802 lat (msec) : 2=0.04% 00:09:28.802 cpu : usr=4.20%, sys=7.60%, ctx=4925, majf=0, minf=2 00:09:28.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:28.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 issued rwts: total=2365,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.802 job1: (groupid=0, jobs=1): err= 0: pid=1600068: Fri Dec 6 11:11:01 2024 00:09:28.802 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:28.802 slat (nsec): min=6801, max=23058, avg=7976.83, stdev=1262.47 00:09:28.802 clat (usec): min=179, max=684, avg=264.85, stdev=45.82 00:09:28.802 lat (usec): min=187, max=707, avg=272.83, stdev=46.08 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 206], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:09:28.802 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:09:28.802 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 330], 00:09:28.802 | 99.00th=[ 474], 99.50th=[ 486], 99.90th=[ 660], 99.95th=[ 668], 00:09:28.802 | 99.99th=[ 685] 00:09:28.802 write: IOPS=2499, BW=9998KiB/s (10.2MB/s)(9.77MiB/1001msec); 0 zone resets 00:09:28.802 slat (nsec): min=9479, max=38734, avg=10966.28, stdev=1667.84 00:09:28.802 clat (usec): min=115, max=370, avg=161.39, stdev=26.59 00:09:28.802 lat (usec): min=127, max=409, avg=172.35, stdev=26.83 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 125], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:09:28.802 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:09:28.802 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 200], 95.00th=[ 219], 00:09:28.802 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 285], 99.95th=[ 318], 00:09:28.802 | 99.99th=[ 371] 00:09:28.802 bw ( KiB/s): min= 9472, max= 9472, per=29.99%, avg=9472.00, stdev= 0.00, samples=1 00:09:28.802 iops : min= 2368, max= 2368, avg=2368.00, stdev= 0.00, samples=1 00:09:28.802 lat (usec) : 250=70.99%, 500=28.81%, 750=0.20% 00:09:28.802 cpu : 
usr=1.80%, sys=5.00%, ctx=4553, majf=0, minf=1 00:09:28.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 issued rwts: total=2048,2502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.802 job2: (groupid=0, jobs=1): err= 0: pid=1600084: Fri Dec 6 11:11:01 2024 00:09:28.802 read: IOPS=23, BW=94.8KiB/s (97.0kB/s)(96.0KiB/1013msec) 00:09:28.802 slat (nsec): min=10024, max=30154, avg=22123.71, stdev=4722.38 00:09:28.802 clat (usec): min=269, max=42045, avg=37630.71, stdev=11498.11 00:09:28.802 lat (usec): min=293, max=42069, avg=37652.84, stdev=11499.56 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 269], 5.00th=[ 343], 10.00th=[40633], 20.00th=[40633], 00:09:28.802 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:28.802 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:28.802 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:28.802 | 99.99th=[42206] 00:09:28.802 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:28.802 slat (nsec): min=10947, max=46349, avg=12502.15, stdev=2898.86 00:09:28.802 clat (usec): min=127, max=338, avg=197.46, stdev=29.72 00:09:28.802 lat (usec): min=139, max=376, avg=209.97, stdev=29.98 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:09:28.802 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 208], 00:09:28.802 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:09:28.802 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 338], 00:09:28.802 | 99.99th=[ 338] 00:09:28.802 bw ( KiB/s): min= 4096, max= 4096, per=12.97%, avg=4096.00, stdev= 0.00, 
samples=1 00:09:28.802 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:28.802 lat (usec) : 250=92.72%, 500=3.17% 00:09:28.802 lat (msec) : 50=4.10% 00:09:28.802 cpu : usr=0.40%, sys=0.89%, ctx=537, majf=0, minf=1 00:09:28.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.802 job3: (groupid=0, jobs=1): err= 0: pid=1600089: Fri Dec 6 11:11:01 2024 00:09:28.802 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:28.802 slat (nsec): min=6904, max=24758, avg=7827.68, stdev=917.79 00:09:28.802 clat (usec): min=185, max=539, avg=263.66, stdev=63.78 00:09:28.802 lat (usec): min=193, max=546, avg=271.49, stdev=63.77 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 227], 00:09:28.802 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:09:28.802 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 318], 95.00th=[ 461], 00:09:28.802 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 529], 00:09:28.802 | 99.99th=[ 537] 00:09:28.802 write: IOPS=2421, BW=9686KiB/s (9919kB/s)(9696KiB/1001msec); 0 zone resets 00:09:28.802 slat (nsec): min=9860, max=45040, avg=10911.04, stdev=1575.62 00:09:28.802 clat (usec): min=117, max=702, avg=168.44, stdev=36.27 00:09:28.802 lat (usec): min=128, max=713, avg=179.35, stdev=36.55 00:09:28.802 clat percentiles (usec): 00:09:28.802 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:28.802 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:09:28.802 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 239], 95.00th=[ 243], 00:09:28.802 | 99.00th=[ 265], 99.50th=[ 
277], 99.90th=[ 502], 99.95th=[ 603], 00:09:28.802 | 99.99th=[ 701] 00:09:28.802 bw ( KiB/s): min= 8192, max= 8192, per=25.94%, avg=8192.00, stdev= 0.00, samples=1 00:09:28.802 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:28.802 lat (usec) : 250=77.06%, 500=22.56%, 750=0.38% 00:09:28.802 cpu : usr=1.90%, sys=4.70%, ctx=4474, majf=0, minf=1 00:09:28.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.802 issued rwts: total=2048,2424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.802 00:09:28.802 Run status group 0 (all jobs): 00:09:28.802 READ: bw=25.0MiB/s (26.2MB/s), 94.8KiB/s-9451KiB/s (97.0kB/s-9677kB/s), io=25.3MiB (26.6MB), run=1001-1013msec 00:09:28.802 WRITE: bw=30.8MiB/s (32.3MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=31.2MiB (32.8MB), run=1001-1013msec 00:09:28.802 00:09:28.802 Disk stats (read/write): 00:09:28.802 nvme0n1: ios=1995/2048, merge=0/0, ticks=434/297, in_queue=731, util=83.37% 00:09:28.802 nvme0n2: ios=1674/2048, merge=0/0, ticks=1381/333, in_queue=1714, util=100.00% 00:09:28.802 nvme0n3: ios=77/512, merge=0/0, ticks=1461/94, in_queue=1555, util=98.92% 00:09:28.802 nvme0n4: ios=1635/2048, merge=0/0, ticks=1336/345, in_queue=1681, util=100.00% 00:09:28.802 11:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:28.802 [global] 00:09:28.802 thread=1 00:09:28.802 invalidate=1 00:09:28.802 rw=write 00:09:28.802 time_based=1 00:09:28.802 runtime=1 00:09:28.802 ioengine=libaio 00:09:28.802 direct=1 00:09:28.802 bs=4096 00:09:28.802 iodepth=128 00:09:28.802 norandommap=0 00:09:28.802 numjobs=1 00:09:28.802 
00:09:28.802 verify_dump=1 00:09:28.802 verify_backlog=512 00:09:28.802 verify_state_save=0 00:09:28.802 do_verify=1 00:09:28.802 verify=crc32c-intel 00:09:28.802 [job0] 00:09:28.802 filename=/dev/nvme0n1 00:09:28.802 [job1] 00:09:28.802 filename=/dev/nvme0n2 00:09:28.802 [job2] 00:09:28.802 filename=/dev/nvme0n3 00:09:28.802 [job3] 00:09:28.802 filename=/dev/nvme0n4 00:09:28.802 Could not set queue depth (nvme0n1) 00:09:28.802 Could not set queue depth (nvme0n2) 00:09:28.802 Could not set queue depth (nvme0n3) 00:09:28.802 Could not set queue depth (nvme0n4) 00:09:29.070 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.070 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.070 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.070 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.070 fio-3.35 00:09:29.070 Starting 4 threads 00:09:30.475 00:09:30.475 job0: (groupid=0, jobs=1): err= 0: pid=1600509: Fri Dec 6 11:11:03 2024 00:09:30.475 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:09:30.475 slat (nsec): min=1363, max=10752k, avg=116779.13, stdev=737382.20 00:09:30.475 clat (usec): min=4950, max=48673, avg=12601.83, stdev=6008.71 00:09:30.475 lat (usec): min=4961, max=48681, avg=12718.61, stdev=6103.52 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 6325], 5.00th=[ 8455], 10.00th=[ 8586], 20.00th=[ 9110], 00:09:30.475 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:09:30.475 | 70.00th=[11863], 80.00th=[12649], 90.00th=[19530], 95.00th=[26870], 00:09:30.475 | 99.00th=[39060], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:09:30.475 | 99.99th=[48497] 00:09:30.475 write: IOPS=3535, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1009msec); 0 zone resets 00:09:30.475 slat 
(usec): min=2, max=50009, avg=167.55, stdev=1160.32 00:09:30.475 clat (usec): min=2543, max=62454, avg=22340.71, stdev=15220.37 00:09:30.475 lat (usec): min=2549, max=83535, avg=22508.26, stdev=15345.97 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 3982], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8979], 00:09:30.475 | 30.00th=[ 9765], 40.00th=[13698], 50.00th=[19268], 60.00th=[23200], 00:09:30.475 | 70.00th=[26346], 80.00th=[35390], 90.00th=[50594], 95.00th=[53216], 00:09:30.475 | 99.00th=[55837], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:09:30.475 | 99.99th=[62653] 00:09:30.475 bw ( KiB/s): min=12464, max=15056, per=19.94%, avg=13760.00, stdev=1832.82, samples=2 00:09:30.475 iops : min= 3116, max= 3764, avg=3440.00, stdev=458.21, samples=2 00:09:30.475 lat (msec) : 4=0.62%, 10=29.10%, 20=41.74%, 50=22.77%, 100=5.77% 00:09:30.475 cpu : usr=1.88%, sys=5.46%, ctx=363, majf=0, minf=1 00:09:30.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.475 issued rwts: total=3072,3567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.475 job1: (groupid=0, jobs=1): err= 0: pid=1600526: Fri Dec 6 11:11:03 2024 00:09:30.475 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:09:30.475 slat (nsec): min=1223, max=9937.4k, avg=110094.71, stdev=664132.19 00:09:30.475 clat (usec): min=4821, max=35920, avg=12129.88, stdev=5382.22 00:09:30.475 lat (usec): min=4831, max=35930, avg=12239.97, stdev=5437.86 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 5997], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9241], 00:09:30.475 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10814], 00:09:30.475 | 70.00th=[11207], 80.00th=[12911], 90.00th=[20055], 
95.00th=[25560], 00:09:30.475 | 99.00th=[31589], 99.50th=[32113], 99.90th=[35914], 99.95th=[35914], 00:09:30.475 | 99.99th=[35914] 00:09:30.475 write: IOPS=3199, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1012msec); 0 zone resets 00:09:30.475 slat (usec): min=2, max=9822, avg=198.05, stdev=878.67 00:09:30.475 clat (usec): min=1206, max=89155, avg=28064.92, stdev=20367.73 00:09:30.475 lat (usec): min=1218, max=89160, avg=28262.97, stdev=20485.72 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 3425], 5.00th=[ 4948], 10.00th=[ 6587], 20.00th=[10028], 00:09:30.475 | 30.00th=[17433], 40.00th=[19792], 50.00th=[23462], 60.00th=[26084], 00:09:30.475 | 70.00th=[30016], 80.00th=[45351], 90.00th=[58983], 95.00th=[76022], 00:09:30.475 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:09:30.475 | 99.99th=[89654] 00:09:30.475 bw ( KiB/s): min=11080, max=13808, per=18.03%, avg=12444.00, stdev=1928.99, samples=2 00:09:30.475 iops : min= 2770, max= 3452, avg=3111.00, stdev=482.25, samples=2 00:09:30.475 lat (msec) : 2=0.03%, 4=0.71%, 10=31.74%, 20=32.16%, 50=27.35% 00:09:30.475 lat (msec) : 100=8.00% 00:09:30.475 cpu : usr=2.08%, sys=4.75%, ctx=379, majf=0, minf=1 00:09:30.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.475 issued rwts: total=3072,3238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.475 job2: (groupid=0, jobs=1): err= 0: pid=1600545: Fri Dec 6 11:11:03 2024 00:09:30.475 read: IOPS=6777, BW=26.5MiB/s (27.8MB/s)(26.5MiB/1002msec) 00:09:30.475 slat (nsec): min=962, max=10380k, avg=72061.39, stdev=494169.44 00:09:30.475 clat (usec): min=1031, max=22089, avg=9634.62, stdev=2335.61 00:09:30.475 lat (usec): min=1035, max=22091, avg=9706.68, stdev=2369.20 00:09:30.475 clat 
percentiles (usec): 00:09:30.475 | 1.00th=[ 5407], 5.00th=[ 6980], 10.00th=[ 7832], 20.00th=[ 8455], 00:09:30.475 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:09:30.475 | 70.00th=[10028], 80.00th=[10945], 90.00th=[11863], 95.00th=[15139], 00:09:30.475 | 99.00th=[18482], 99.50th=[20055], 99.90th=[21890], 99.95th=[22152], 00:09:30.475 | 99.99th=[22152] 00:09:30.475 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:09:30.475 slat (nsec): min=1782, max=9031.4k, avg=64290.77, stdev=396050.12 00:09:30.475 clat (usec): min=235, max=20290, avg=8601.00, stdev=1722.49 00:09:30.475 lat (usec): min=802, max=20294, avg=8665.29, stdev=1761.70 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 2737], 5.00th=[ 5473], 10.00th=[ 6915], 20.00th=[ 7963], 00:09:30.475 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:09:30.475 | 70.00th=[ 8717], 80.00th=[10028], 90.00th=[10945], 95.00th=[11076], 00:09:30.475 | 99.00th=[11994], 99.50th=[13566], 99.90th=[17171], 99.95th=[19530], 00:09:30.475 | 99.99th=[20317] 00:09:30.475 bw ( KiB/s): min=26528, max=26528, per=38.43%, avg=26528.00, stdev= 0.00, samples=1 00:09:30.475 iops : min= 6632, max= 6632, avg=6632.00, stdev= 0.00, samples=1 00:09:30.475 lat (usec) : 250=0.01%, 1000=0.07% 00:09:30.475 lat (msec) : 2=0.34%, 4=0.55%, 10=73.80%, 20=24.98%, 50=0.24% 00:09:30.475 cpu : usr=4.60%, sys=7.69%, ctx=616, majf=0, minf=2 00:09:30.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.475 issued rwts: total=6791,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.475 job3: (groupid=0, jobs=1): err= 0: pid=1600551: Fri Dec 6 11:11:03 2024 00:09:30.475 read: IOPS=3053, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1006msec) 00:09:30.475 slat (nsec): min=1360, max=23897k, avg=151504.90, stdev=1061265.04 00:09:30.475 clat (usec): min=4723, max=67594, avg=19611.25, stdev=12701.96 00:09:30.475 lat (usec): min=4745, max=70929, avg=19762.75, stdev=12805.06 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 5473], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[10552], 00:09:30.475 | 30.00th=[10814], 40.00th=[11207], 50.00th=[13566], 60.00th=[18744], 00:09:30.475 | 70.00th=[20317], 80.00th=[30802], 90.00th=[39584], 95.00th=[46400], 00:09:30.475 | 99.00th=[62653], 99.50th=[65274], 99.90th=[67634], 99.95th=[67634], 00:09:30.475 | 99.99th=[67634] 00:09:30.475 write: IOPS=3469, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1006msec); 0 zone resets 00:09:30.475 slat (nsec): min=1772, max=31283k, avg=142913.07, stdev=863844.64 00:09:30.475 clat (usec): min=1120, max=59531, avg=17864.67, stdev=11924.34 00:09:30.475 lat (usec): min=1138, max=59534, avg=18007.58, stdev=12004.17 00:09:30.475 clat percentiles (usec): 00:09:30.475 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 8979], 20.00th=[10028], 00:09:30.475 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11338], 60.00th=[13960], 00:09:30.475 | 70.00th=[21365], 80.00th=[25560], 90.00th=[37487], 95.00th=[47449], 00:09:30.475 | 99.00th=[52167], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:09:30.475 | 99.99th=[59507] 00:09:30.475 bw ( KiB/s): min= 7648, max=19256, per=19.49%, avg=13452.00, stdev=8208.10, samples=2 00:09:30.475 iops : min= 1912, max= 4814, avg=3363.00, stdev=2052.02, samples=2 00:09:30.475 lat (msec) : 2=0.03%, 10=15.30%, 20=53.26%, 50=29.15%, 100=2.26% 00:09:30.475 cpu : usr=2.29%, sys=3.78%, ctx=399, majf=0, minf=1 00:09:30.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.475 issued rwts: 
total=3072,3490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.475 00:09:30.475 Run status group 0 (all jobs): 00:09:30.475 READ: bw=61.8MiB/s (64.8MB/s), 11.9MiB/s-26.5MiB/s (12.4MB/s-27.8MB/s), io=62.5MiB (65.6MB), run=1002-1012msec 00:09:30.475 WRITE: bw=67.4MiB/s (70.7MB/s), 12.5MiB/s-27.9MiB/s (13.1MB/s-29.3MB/s), io=68.2MiB (71.5MB), run=1002-1012msec 00:09:30.475 00:09:30.475 Disk stats (read/write): 00:09:30.475 nvme0n1: ios=2568/2673, merge=0/0, ticks=33074/62685, in_queue=95759, util=91.48% 00:09:30.475 nvme0n2: ios=2600/2767, merge=0/0, ticks=30272/71313, in_queue=101585, util=96.34% 00:09:30.475 nvme0n3: ios=5689/6079, merge=0/0, ticks=37194/33836, in_queue=71030, util=90.40% 00:09:30.475 nvme0n4: ios=3047/3072, merge=0/0, ticks=22865/20107, in_queue=42972, util=99.58% 00:09:30.476 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:30.476 [global] 00:09:30.476 thread=1 00:09:30.476 invalidate=1 00:09:30.476 rw=randwrite 00:09:30.476 time_based=1 00:09:30.476 runtime=1 00:09:30.476 ioengine=libaio 00:09:30.476 direct=1 00:09:30.476 bs=4096 00:09:30.476 iodepth=128 00:09:30.476 norandommap=0 00:09:30.476 numjobs=1 00:09:30.476 00:09:30.476 verify_dump=1 00:09:30.476 verify_backlog=512 00:09:30.476 verify_state_save=0 00:09:30.476 do_verify=1 00:09:30.476 verify=crc32c-intel 00:09:30.476 [job0] 00:09:30.476 filename=/dev/nvme0n1 00:09:30.476 [job1] 00:09:30.476 filename=/dev/nvme0n2 00:09:30.476 [job2] 00:09:30.476 filename=/dev/nvme0n3 00:09:30.476 [job3] 00:09:30.476 filename=/dev/nvme0n4 00:09:30.476 Could not set queue depth (nvme0n1) 00:09:30.476 Could not set queue depth (nvme0n2) 00:09:30.476 Could not set queue depth (nvme0n3) 00:09:30.476 Could not set queue depth (nvme0n4) 00:09:30.733 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.733 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.733 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.733 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.734 fio-3.35 00:09:30.734 Starting 4 threads 00:09:32.108 00:09:32.108 job0: (groupid=0, jobs=1): err= 0: pid=1601002: Fri Dec 6 11:11:04 2024 00:09:32.108 read: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:09:32.108 slat (nsec): min=1355, max=27684k, avg=133319.27, stdev=1010876.07 00:09:32.108 clat (usec): min=2521, max=70076, avg=15611.12, stdev=9761.38 00:09:32.108 lat (usec): min=4730, max=70101, avg=15744.44, stdev=9875.45 00:09:32.108 clat percentiles (usec): 00:09:32.108 | 1.00th=[ 5473], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10945], 00:09:32.108 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:09:32.108 | 70.00th=[13173], 80.00th=[15401], 90.00th=[30802], 95.00th=[36963], 00:09:32.108 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[64750], 00:09:32.108 | 99.99th=[69731] 00:09:32.108 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:32.108 slat (usec): min=2, max=12883, avg=145.55, stdev=778.16 00:09:32.108 clat (msec): min=2, max=103, avg=21.63, stdev=18.45 00:09:32.108 lat (msec): min=2, max=106, avg=21.78, stdev=18.55 00:09:32.108 clat percentiles (msec): 00:09:32.108 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 11], 00:09:32.108 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 17], 60.00th=[ 19], 00:09:32.108 | 70.00th=[ 20], 80.00th=[ 29], 90.00th=[ 51], 95.00th=[ 54], 00:09:32.108 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 105], 00:09:32.108 | 99.99th=[ 105] 00:09:32.108 bw ( KiB/s): min=14264, max=14400, per=18.73%, 
avg=14332.00, stdev=96.17, samples=2 00:09:32.108 iops : min= 3566, max= 3600, avg=3583.00, stdev=24.04, samples=2 00:09:32.108 lat (msec) : 4=1.11%, 10=16.20%, 20=60.74%, 50=15.63%, 100=6.09% 00:09:32.108 lat (msec) : 250=0.24% 00:09:32.108 cpu : usr=2.89%, sys=4.18%, ctx=367, majf=0, minf=1 00:09:32.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:32.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.108 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.108 job1: (groupid=0, jobs=1): err= 0: pid=1601019: Fri Dec 6 11:11:04 2024 00:09:32.108 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:09:32.108 slat (nsec): min=1151, max=13952k, avg=105552.50, stdev=763757.16 00:09:32.108 clat (usec): min=5281, max=53392, avg=13982.29, stdev=6802.19 00:09:32.108 lat (usec): min=5973, max=55608, avg=14087.84, stdev=6870.18 00:09:32.108 clat percentiles (usec): 00:09:32.108 | 1.00th=[ 6325], 5.00th=[ 7570], 10.00th=[ 9241], 20.00th=[10159], 00:09:32.108 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[12780], 00:09:32.108 | 70.00th=[13829], 80.00th=[16581], 90.00th=[23987], 95.00th=[29492], 00:09:32.108 | 99.00th=[41681], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216], 00:09:32.108 | 99.99th=[53216] 00:09:32.108 write: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1008msec); 0 zone resets 00:09:32.108 slat (usec): min=2, max=16131, avg=142.62, stdev=761.73 00:09:32.108 clat (usec): min=2397, max=53411, avg=19844.38, stdev=10901.56 00:09:32.108 lat (usec): min=2457, max=53425, avg=19987.00, stdev=10981.43 00:09:32.108 clat percentiles (usec): 00:09:32.108 | 1.00th=[ 5080], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[ 9634], 00:09:32.108 | 30.00th=[11207], 40.00th=[15270], 50.00th=[19006], 60.00th=[19530], 
00:09:32.108 | 70.00th=[21365], 80.00th=[29754], 90.00th=[37487], 95.00th=[43254], 00:09:32.108 | 99.00th=[46400], 99.50th=[46924], 99.90th=[49021], 99.95th=[53216], 00:09:32.108 | 99.99th=[53216] 00:09:32.108 bw ( KiB/s): min=13936, max=16384, per=19.82%, avg=15160.00, stdev=1731.00, samples=2 00:09:32.108 iops : min= 3484, max= 4096, avg=3790.00, stdev=432.75, samples=2 00:09:32.108 lat (msec) : 4=0.17%, 10=21.90%, 20=52.00%, 50=25.73%, 100=0.20% 00:09:32.108 cpu : usr=3.38%, sys=4.97%, ctx=367, majf=0, minf=2 00:09:32.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.108 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.108 job2: (groupid=0, jobs=1): err= 0: pid=1601039: Fri Dec 6 11:11:04 2024 00:09:32.108 read: IOPS=5362, BW=20.9MiB/s (22.0MB/s)(21.1MiB/1005msec) 00:09:32.108 slat (nsec): min=1273, max=9999.1k, avg=96945.38, stdev=676801.82 00:09:32.108 clat (usec): min=3867, max=21595, avg=11937.80, stdev=2952.71 00:09:32.108 lat (usec): min=3873, max=21604, avg=12034.74, stdev=2985.30 00:09:32.108 clat percentiles (usec): 00:09:32.108 | 1.00th=[ 4015], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10290], 00:09:32.108 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:09:32.108 | 70.00th=[11863], 80.00th=[14222], 90.00th=[16450], 95.00th=[17957], 00:09:32.108 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20317], 99.95th=[20579], 00:09:32.108 | 99.99th=[21627] 00:09:32.108 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:32.108 slat (usec): min=2, max=27840, avg=79.56, stdev=590.91 00:09:32.108 clat (usec): min=2428, max=49498, avg=11191.27, stdev=4413.48 00:09:32.108 lat (usec): min=2437, max=49519, avg=11270.83, 
stdev=4463.06 00:09:32.108 clat percentiles (usec): 00:09:32.108 | 1.00th=[ 3064], 5.00th=[ 4817], 10.00th=[ 6390], 20.00th=[ 9503], 00:09:32.108 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:09:32.108 | 70.00th=[11207], 80.00th=[11338], 90.00th=[15401], 95.00th=[19530], 00:09:32.108 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[35914], 00:09:32.108 | 99.99th=[49546] 00:09:32.108 bw ( KiB/s): min=21424, max=23632, per=29.45%, avg=22528.00, stdev=1561.29, samples=2 00:09:32.108 iops : min= 5356, max= 5908, avg=5632.00, stdev=390.32, samples=2 00:09:32.108 lat (msec) : 4=1.86%, 10=17.01%, 20=78.34%, 50=2.79% 00:09:32.109 cpu : usr=3.88%, sys=5.78%, ctx=717, majf=0, minf=1 00:09:32.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:32.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.109 issued rwts: total=5389,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.109 job3: (groupid=0, jobs=1): err= 0: pid=1601040: Fri Dec 6 11:11:04 2024 00:09:32.109 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1003msec) 00:09:32.109 slat (nsec): min=1235, max=5269.4k, avg=83894.28, stdev=470167.60 00:09:32.109 clat (usec): min=1654, max=15623, avg=10651.33, stdev=1530.79 00:09:32.109 lat (usec): min=3905, max=15631, avg=10735.23, stdev=1567.86 00:09:32.109 clat percentiles (usec): 00:09:32.109 | 1.00th=[ 6652], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[10159], 00:09:32.109 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:09:32.109 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12649], 95.00th=[13304], 00:09:32.109 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15270], 99.95th=[15401], 00:09:32.109 | 99.99th=[15664] 00:09:32.109 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone 
resets 00:09:32.109 slat (usec): min=2, max=5175, avg=76.86, stdev=406.05 00:09:32.109 clat (usec): min=5067, max=15438, avg=10478.60, stdev=1204.54 00:09:32.109 lat (usec): min=5074, max=15727, avg=10555.46, stdev=1241.98 00:09:32.109 clat percentiles (usec): 00:09:32.109 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10028], 00:09:32.109 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:09:32.109 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[12911], 00:09:32.109 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:09:32.109 | 99.99th=[15401] 00:09:32.109 bw ( KiB/s): min=24576, max=24576, per=32.13%, avg=24576.00, stdev= 0.00, samples=2 00:09:32.109 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:32.109 lat (msec) : 2=0.01%, 4=0.07%, 10=18.37%, 20=81.55% 00:09:32.109 cpu : usr=5.39%, sys=6.39%, ctx=637, majf=0, minf=1 00:09:32.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:32.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.109 issued rwts: total=5878,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.109 00:09:32.109 Run status group 0 (all jobs): 00:09:32.109 READ: bw=69.9MiB/s (73.3MB/s), 12.4MiB/s-22.9MiB/s (13.0MB/s-24.0MB/s), io=70.5MiB (73.9MB), run=1003-1008msec 00:09:32.109 WRITE: bw=74.7MiB/s (78.3MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.1MB/s), io=75.3MiB (79.0MB), run=1003-1008msec 00:09:32.109 00:09:32.109 Disk stats (read/write): 00:09:32.109 nvme0n1: ios=2726/3072, merge=0/0, ticks=20113/37318, in_queue=57431, util=86.67% 00:09:32.109 nvme0n2: ios=3122/3183, merge=0/0, ticks=42238/62672, in_queue=104910, util=90.23% 00:09:32.109 nvme0n3: ios=4619/4615, merge=0/0, ticks=53536/51444, in_queue=104980, util=93.40% 00:09:32.109 nvme0n4: 
ios=5055/5120, merge=0/0, ticks=26880/24985, in_queue=51865, util=95.35% 00:09:32.109 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:32.109 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1601170 00:09:32.109 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:32.109 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:32.109 [global] 00:09:32.109 thread=1 00:09:32.109 invalidate=1 00:09:32.109 rw=read 00:09:32.109 time_based=1 00:09:32.109 runtime=10 00:09:32.109 ioengine=libaio 00:09:32.109 direct=1 00:09:32.109 bs=4096 00:09:32.109 iodepth=1 00:09:32.109 norandommap=1 00:09:32.109 numjobs=1 00:09:32.109 00:09:32.109 [job0] 00:09:32.109 filename=/dev/nvme0n1 00:09:32.109 [job1] 00:09:32.109 filename=/dev/nvme0n2 00:09:32.109 [job2] 00:09:32.109 filename=/dev/nvme0n3 00:09:32.109 [job3] 00:09:32.109 filename=/dev/nvme0n4 00:09:32.109 Could not set queue depth (nvme0n1) 00:09:32.109 Could not set queue depth (nvme0n2) 00:09:32.109 Could not set queue depth (nvme0n3) 00:09:32.109 Could not set queue depth (nvme0n4) 00:09:32.367 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.367 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.367 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.367 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.367 fio-3.35 00:09:32.367 Starting 4 threads 00:09:34.899 11:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:35.157 fio: io_u 
error on file /dev/nvme0n4: Operation not supported: read offset=20062208, buflen=4096 00:09:35.157 fio: pid=1601545, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:35.157 11:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:35.415 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.415 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:35.415 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47157248, buflen=4096 00:09:35.415 fio: pid=1601540, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:35.416 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.416 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:35.416 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52707328, buflen=4096 00:09:35.416 fio: pid=1601508, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:35.675 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44548096, buflen=4096 00:09:35.675 fio: pid=1601523, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:35.675 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.675 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:35.675 00:09:35.675 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1601508: Fri Dec 6 11:11:08 2024 00:09:35.675 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(50.3MiB/3121msec) 00:09:35.675 slat (usec): min=5, max=20349, avg=12.22, stdev=264.75 00:09:35.675 clat (usec): min=161, max=42008, avg=227.55, stdev=907.15 00:09:35.675 lat (usec): min=168, max=42025, avg=239.77, stdev=945.81 00:09:35.675 clat percentiles (usec): 00:09:35.675 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:09:35.675 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:09:35.675 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 237], 00:09:35.675 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 515], 99.95th=[22414], 00:09:35.675 | 99.99th=[41157] 00:09:35.675 bw ( KiB/s): min=12687, max=18760, per=34.22%, avg=16738.50, stdev=2866.84, samples=6 00:09:35.675 iops : min= 3171, max= 4690, avg=4184.50, stdev=716.92, samples=6 00:09:35.675 lat (usec) : 250=96.81%, 500=3.05%, 750=0.05% 00:09:35.675 lat (msec) : 2=0.02%, 10=0.01%, 50=0.05% 00:09:35.675 cpu : usr=1.31%, sys=3.62%, ctx=12873, majf=0, minf=1 00:09:35.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 issued rwts: total=12869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.675 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1601523: Fri Dec 6 11:11:08 2024 00:09:35.675 read: IOPS=3312, BW=12.9MiB/s (13.6MB/s)(42.5MiB/3284msec) 00:09:35.675 slat (usec): min=5, max=25967, avg=14.72, stdev=346.72 00:09:35.675 clat 
(usec): min=152, max=41347, avg=285.42, stdev=1605.86 00:09:35.675 lat (usec): min=160, max=41354, avg=300.14, stdev=1643.63 00:09:35.675 clat percentiles (usec): 00:09:35.675 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:09:35.675 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:09:35.675 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 314], 00:09:35.675 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[41157], 99.95th=[41157], 00:09:35.675 | 99.99th=[41157] 00:09:35.675 bw ( KiB/s): min= 8840, max=18240, per=26.70%, avg=13058.33, stdev=3932.15, samples=6 00:09:35.675 iops : min= 2210, max= 4560, avg=3264.50, stdev=983.07, samples=6 00:09:35.675 lat (usec) : 250=90.03%, 500=9.74%, 750=0.03% 00:09:35.675 lat (msec) : 10=0.02%, 20=0.01%, 50=0.17% 00:09:35.675 cpu : usr=0.73%, sys=3.11%, ctx=10884, majf=0, minf=1 00:09:35.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 issued rwts: total=10877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.675 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1601540: Fri Dec 6 11:11:08 2024 00:09:35.675 read: IOPS=3924, BW=15.3MiB/s (16.1MB/s)(45.0MiB/2934msec) 00:09:35.675 slat (usec): min=6, max=14741, avg= 9.66, stdev=159.77 00:09:35.675 clat (usec): min=163, max=41956, avg=242.31, stdev=952.57 00:09:35.675 lat (usec): min=170, max=41964, avg=251.97, stdev=966.13 00:09:35.675 clat percentiles (usec): 00:09:35.675 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:35.675 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:09:35.675 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:09:35.675 | 
99.00th=[ 277], 99.50th=[ 322], 99.90th=[ 445], 99.95th=[41157], 00:09:35.675 | 99.99th=[41681] 00:09:35.675 bw ( KiB/s): min=15016, max=17760, per=34.61%, avg=16928.00, stdev=1133.62, samples=5 00:09:35.675 iops : min= 3754, max= 4440, avg=4232.00, stdev=283.40, samples=5 00:09:35.675 lat (usec) : 250=96.08%, 500=3.82%, 750=0.02% 00:09:35.675 lat (msec) : 4=0.01%, 20=0.01%, 50=0.05% 00:09:35.675 cpu : usr=0.92%, sys=3.75%, ctx=11516, majf=0, minf=2 00:09:35.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 issued rwts: total=11514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.675 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1601545: Fri Dec 6 11:11:08 2024 00:09:35.675 read: IOPS=1817, BW=7267KiB/s (7441kB/s)(19.1MiB/2696msec) 00:09:35.675 slat (nsec): min=6868, max=54994, avg=10171.59, stdev=3415.76 00:09:35.675 clat (usec): min=176, max=41898, avg=533.69, stdev=3335.48 00:09:35.675 lat (usec): min=196, max=41908, avg=543.86, stdev=3335.78 00:09:35.675 clat percentiles (usec): 00:09:35.675 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:09:35.675 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:09:35.675 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 367], 00:09:35.675 | 99.00th=[ 457], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:35.675 | 99.99th=[41681] 00:09:35.675 bw ( KiB/s): min= 104, max=13992, per=16.00%, avg=7828.80, stdev=5329.78, samples=5 00:09:35.675 iops : min= 26, max= 3498, avg=1957.20, stdev=1332.45, samples=5 00:09:35.675 lat (usec) : 250=55.30%, 500=43.83%, 750=0.16% 00:09:35.675 lat (msec) : 20=0.02%, 50=0.67% 00:09:35.675 cpu : usr=0.78%, 
sys=2.93%, ctx=4899, majf=0, minf=2 00:09:35.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.675 issued rwts: total=4899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.675 00:09:35.675 Run status group 0 (all jobs): 00:09:35.675 READ: bw=47.8MiB/s (50.1MB/s), 7267KiB/s-16.1MiB/s (7441kB/s-16.9MB/s), io=157MiB (164MB), run=2696-3284msec 00:09:35.675 00:09:35.675 Disk stats (read/write): 00:09:35.675 nvme0n1: ios=12868/0, merge=0/0, ticks=2892/0, in_queue=2892, util=93.93% 00:09:35.675 nvme0n2: ios=10085/0, merge=0/0, ticks=2915/0, in_queue=2915, util=94.15% 00:09:35.675 nvme0n3: ios=11511/0, merge=0/0, ticks=2657/0, in_queue=2657, util=95.77% 00:09:35.675 nvme0n4: ios=4896/0, merge=0/0, ticks=2479/0, in_queue=2479, util=96.45% 00:09:35.934 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.934 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:36.192 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.192 11:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:36.192 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.192 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:36.451 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.451 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:36.709 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:36.709 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1601170 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:36.710 nvmf hotplug test: fio failed as expected 00:09:36.710 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.968 rmmod nvme_tcp 00:09:36.968 rmmod nvme_fabrics 00:09:36.968 rmmod nvme_keyring 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:36.968 11:11:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1598096 ']' 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1598096 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1598096 ']' 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1598096 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598096 00:09:36.968 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.228 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.228 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598096' 00:09:37.228 killing process with pid 1598096 00:09:37.228 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1598096 00:09:37.228 11:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1598096 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.228 11:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.976 00:09:39.976 real 0m27.466s 00:09:39.976 user 2m2.407s 00:09:39.976 sys 0m9.167s 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.976 ************************************ 00:09:39.976 END TEST nvmf_fio_target 00:09:39.976 ************************************ 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.976 ************************************ 
00:09:39.976 START TEST nvmf_bdevio 00:09:39.976 ************************************ 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:39.976 * Looking for test storage... 00:09:39.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.976 11:11:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.976 --rc genhtml_branch_coverage=1 00:09:39.976 --rc genhtml_function_coverage=1 00:09:39.976 --rc genhtml_legend=1 00:09:39.976 --rc geninfo_all_blocks=1 00:09:39.976 --rc geninfo_unexecuted_blocks=1 00:09:39.976 00:09:39.976 ' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.976 --rc genhtml_branch_coverage=1 00:09:39.976 --rc genhtml_function_coverage=1 00:09:39.976 --rc genhtml_legend=1 00:09:39.976 --rc geninfo_all_blocks=1 00:09:39.976 --rc geninfo_unexecuted_blocks=1 00:09:39.976 00:09:39.976 ' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.976 --rc genhtml_branch_coverage=1 00:09:39.976 --rc genhtml_function_coverage=1 00:09:39.976 --rc genhtml_legend=1 00:09:39.976 --rc geninfo_all_blocks=1 00:09:39.976 --rc geninfo_unexecuted_blocks=1 00:09:39.976 00:09:39.976 ' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.976 --rc genhtml_branch_coverage=1 00:09:39.976 --rc genhtml_function_coverage=1 00:09:39.976 --rc genhtml_legend=1 00:09:39.976 --rc geninfo_all_blocks=1 00:09:39.976 --rc geninfo_unexecuted_blocks=1 00:09:39.976 00:09:39.976 ' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.976 11:11:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.976 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.977 11:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.259 11:11:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.259 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.260 11:11:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:45.260 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:45.260 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.260 
11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:45.260 Found net devices under 0000:af:00.0: cvl_0_0 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:45.260 Found net devices under 0000:af:00.1: cvl_0_1 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.260 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:09:45.520 00:09:45.520 --- 10.0.0.2 ping statistics --- 00:09:45.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.520 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:09:45.520 00:09:45.520 --- 10.0.0.1 ping statistics --- 00:09:45.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.520 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.520 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.521 11:11:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.521 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.779 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1606089 00:09:45.779 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:45.779 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1606089 00:09:45.779 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1606089 ']' 00:09:45.779 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.780 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.780 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.780 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.780 11:11:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.780 [2024-12-06 11:11:18.508363] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:09:45.780 [2024-12-06 11:11:18.508406] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.780 [2024-12-06 11:11:18.586000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.780 [2024-12-06 11:11:18.623916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.780 [2024-12-06 11:11:18.623952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.780 [2024-12-06 11:11:18.623963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.780 [2024-12-06 11:11:18.623968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.780 [2024-12-06 11:11:18.623973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.780 [2024-12-06 11:11:18.625602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.780 [2024-12-06 11:11:18.625715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.780 [2024-12-06 11:11:18.625801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.780 [2024-12-06 11:11:18.625802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 [2024-12-06 11:11:19.362797] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.713 11:11:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 Malloc0 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.713 [2024-12-06 11:11:19.424325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.713 { 00:09:46.713 "params": { 00:09:46.713 "name": "Nvme$subsystem", 00:09:46.713 "trtype": "$TEST_TRANSPORT", 00:09:46.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.713 "adrfam": "ipv4", 00:09:46.713 "trsvcid": "$NVMF_PORT", 00:09:46.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.713 "hdgst": ${hdgst:-false}, 00:09:46.713 "ddgst": ${ddgst:-false} 00:09:46.713 }, 00:09:46.713 "method": "bdev_nvme_attach_controller" 00:09:46.713 } 00:09:46.713 EOF 00:09:46.713 )") 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:46.713 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.713 "params": { 00:09:46.713 "name": "Nvme1", 00:09:46.713 "trtype": "tcp", 00:09:46.713 "traddr": "10.0.0.2", 00:09:46.713 "adrfam": "ipv4", 00:09:46.713 "trsvcid": "4420", 00:09:46.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.713 "hdgst": false, 00:09:46.713 "ddgst": false 00:09:46.713 }, 00:09:46.713 "method": "bdev_nvme_attach_controller" 00:09:46.713 }' 00:09:46.713 [2024-12-06 11:11:19.474484] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:09:46.713 [2024-12-06 11:11:19.474520] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606178 ] 00:09:46.713 [2024-12-06 11:11:19.545285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.713 [2024-12-06 11:11:19.585877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.713 [2024-12-06 11:11:19.585989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.713 [2024-12-06 11:11:19.585989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.972 I/O targets: 00:09:46.972 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:46.972 00:09:46.972 00:09:46.972 CUnit - A unit testing framework for C - Version 2.1-3 00:09:46.972 http://cunit.sourceforge.net/ 00:09:46.972 00:09:46.972 00:09:46.972 Suite: bdevio tests on: Nvme1n1 00:09:46.972 Test: blockdev write read block ...passed 00:09:46.972 Test: blockdev write zeroes read block ...passed 00:09:46.972 Test: blockdev write zeroes read no split ...passed 00:09:46.972 Test: blockdev write zeroes read split 
...passed 00:09:47.230 Test: blockdev write zeroes read split partial ...passed 00:09:47.230 Test: blockdev reset ...[2024-12-06 11:11:19.938464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:47.230 [2024-12-06 11:11:19.938526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d400 (9): Bad file descriptor 00:09:47.230 [2024-12-06 11:11:19.992131] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:47.230 passed 00:09:47.230 Test: blockdev write read 8 blocks ...passed 00:09:47.230 Test: blockdev write read size > 128k ...passed 00:09:47.230 Test: blockdev write read invalid size ...passed 00:09:47.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.230 Test: blockdev write read max offset ...passed 00:09:47.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.230 Test: blockdev writev readv 8 blocks ...passed 00:09:47.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.488 Test: blockdev writev readv block ...passed 00:09:47.488 Test: blockdev writev readv size > 128k ...passed 00:09:47.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.488 Test: blockdev comparev and writev ...[2024-12-06 11:11:20.203728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.488 [2024-12-06 11:11:20.203757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:47.488 [2024-12-06 11:11:20.203774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.488 [2024-12-06 
11:11:20.203782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.204531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.489 [2024-12-06 11:11:20.204539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:47.489 passed 00:09:47.489 Test: blockdev nvme passthru rw ...passed 00:09:47.489 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:11:20.286434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.489 [2024-12-06 11:11:20.286449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.286554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.489 [2024-12-06 11:11:20.286563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.286657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.489 [2024-12-06 11:11:20.286666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:47.489 [2024-12-06 11:11:20.286766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.489 [2024-12-06 11:11:20.286775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:47.489 passed 00:09:47.489 Test: blockdev nvme admin passthru ...passed 00:09:47.489 Test: blockdev copy ...passed 00:09:47.489 00:09:47.489 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.489 suites 1 1 n/a 0 0 00:09:47.489 tests 23 23 23 0 0 00:09:47.489 asserts 152 152 152 0 n/a 00:09:47.489 00:09:47.489 Elapsed time = 1.121 seconds 
00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.747 rmmod nvme_tcp 00:09:47.747 rmmod nvme_fabrics 00:09:47.747 rmmod nvme_keyring 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1606089 ']' 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1606089 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1606089 ']' 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1606089 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606089 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606089' 00:09:47.747 killing process with pid 1606089 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1606089 00:09:47.747 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1606089 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.006 11:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.546 00:09:50.546 real 0m10.656s 00:09:50.546 user 0m12.541s 00:09:50.546 sys 0m5.096s 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 ************************************ 00:09:50.546 END TEST nvmf_bdevio 00:09:50.546 ************************************ 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:50.546 00:09:50.546 real 4m41.114s 00:09:50.546 user 10m55.741s 00:09:50.546 sys 1m40.658s 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 ************************************ 00:09:50.546 END TEST nvmf_target_core 00:09:50.546 ************************************ 00:09:50.546 11:11:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.546 11:11:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.546 11:11:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.546 11:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:50.546 ************************************ 00:09:50.546 START TEST nvmf_target_extra 00:09:50.546 ************************************ 00:09:50.546 11:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.546 * Looking for test storage... 00:09:50.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.546 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.546 --rc genhtml_branch_coverage=1 00:09:50.546 --rc genhtml_function_coverage=1 00:09:50.546 --rc genhtml_legend=1 00:09:50.547 --rc geninfo_all_blocks=1 
00:09:50.547 --rc geninfo_unexecuted_blocks=1 00:09:50.547 00:09:50.547 ' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.547 --rc genhtml_branch_coverage=1 00:09:50.547 --rc genhtml_function_coverage=1 00:09:50.547 --rc genhtml_legend=1 00:09:50.547 --rc geninfo_all_blocks=1 00:09:50.547 --rc geninfo_unexecuted_blocks=1 00:09:50.547 00:09:50.547 ' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.547 --rc genhtml_branch_coverage=1 00:09:50.547 --rc genhtml_function_coverage=1 00:09:50.547 --rc genhtml_legend=1 00:09:50.547 --rc geninfo_all_blocks=1 00:09:50.547 --rc geninfo_unexecuted_blocks=1 00:09:50.547 00:09:50.547 ' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.547 --rc genhtml_branch_coverage=1 00:09:50.547 --rc genhtml_function_coverage=1 00:09:50.547 --rc genhtml_legend=1 00:09:50.547 --rc geninfo_all_blocks=1 00:09:50.547 --rc geninfo_unexecuted_blocks=1 00:09:50.547 00:09:50.547 ' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.547 ************************************ 00:09:50.547 START TEST nvmf_example 00:09:50.547 ************************************ 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.547 * Looking for test storage... 00:09:50.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.547 
11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.547 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.547 --rc genhtml_branch_coverage=1 00:09:50.547 --rc genhtml_function_coverage=1 00:09:50.547 --rc genhtml_legend=1 00:09:50.547 --rc geninfo_all_blocks=1 00:09:50.547 --rc geninfo_unexecuted_blocks=1 00:09:50.548 00:09:50.548 ' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.548 --rc genhtml_branch_coverage=1 00:09:50.548 --rc genhtml_function_coverage=1 00:09:50.548 --rc genhtml_legend=1 00:09:50.548 --rc geninfo_all_blocks=1 00:09:50.548 --rc geninfo_unexecuted_blocks=1 00:09:50.548 00:09:50.548 ' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.548 --rc genhtml_branch_coverage=1 00:09:50.548 --rc genhtml_function_coverage=1 00:09:50.548 --rc genhtml_legend=1 00:09:50.548 --rc geninfo_all_blocks=1 00:09:50.548 --rc geninfo_unexecuted_blocks=1 00:09:50.548 00:09:50.548 ' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.548 --rc 
genhtml_branch_coverage=1 00:09:50.548 --rc genhtml_function_coverage=1 00:09:50.548 --rc genhtml_legend=1 00:09:50.548 --rc geninfo_all_blocks=1 00:09:50.548 --rc geninfo_unexecuted_blocks=1 00:09:50.548 00:09:50.548 ' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:50.548 11:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.548 
11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.548 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.116 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.117 11:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:57.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:57.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:57.117 Found net devices under 0000:af:00.0: cvl_0_0 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.117 11:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:57.117 Found net devices under 0000:af:00.1: cvl_0_1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.117 
11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:09:57.117 00:09:57.117 --- 10.0.0.2 ping statistics --- 00:09:57.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.117 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:57.117 00:09:57.117 --- 10.0.0.1 ping statistics --- 00:09:57.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.117 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.117 11:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1610224 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1610224 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1610224 ']' 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:57.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.117 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:57.682 
11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:57.682 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:09.917 Initializing NVMe Controllers 00:10:09.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:09.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:09.917 Initialization complete. Launching workers. 00:10:09.917 ======================================================== 00:10:09.917 Latency(us) 00:10:09.917 Device Information : IOPS MiB/s Average min max 00:10:09.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19691.20 76.92 3249.71 505.63 15631.92 00:10:09.917 ======================================================== 00:10:09.917 Total : 19691.20 76.92 3249.71 505.63 15631.92 00:10:09.917 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.917 rmmod nvme_tcp 00:10:09.917 rmmod nvme_fabrics 00:10:09.917 rmmod nvme_keyring 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1610224 ']' 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1610224 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1610224 ']' 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1610224 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610224 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610224' 00:10:09.917 killing process with pid 1610224 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1610224 00:10:09.917 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1610224 00:10:09.917 nvmf threads initialize successfully 00:10:09.917 bdev subsystem init successfully 00:10:09.917 created a nvmf target service 00:10:09.917 create targets's poll groups done 00:10:09.917 all subsystems of target started 00:10:09.918 nvmf target is running 00:10:09.918 all subsystems of target stopped 00:10:09.918 destroy targets's poll groups done 00:10:09.918 destroyed the nvmf target service 00:10:09.918 bdev subsystem 
finish successfully 00:10:09.918 nvmf threads destroy successfully 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.918 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.487 00:10:10.487 real 0m19.996s 00:10:10.487 user 0m46.353s 00:10:10.487 sys 0m6.155s 00:10:10.487 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.487 ************************************ 00:10:10.487 END TEST nvmf_example 00:10:10.487 ************************************ 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.487 ************************************ 00:10:10.487 START TEST nvmf_filesystem 00:10:10.487 ************************************ 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:10.487 * Looking for test storage... 
00:10:10.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.487 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:10.751 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.751 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:10.751 --rc genhtml_branch_coverage=1 00:10:10.751 --rc genhtml_function_coverage=1 00:10:10.751 --rc genhtml_legend=1 00:10:10.751 --rc geninfo_all_blocks=1 00:10:10.751 --rc geninfo_unexecuted_blocks=1 00:10:10.751 00:10:10.751 ' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.751 --rc genhtml_branch_coverage=1 00:10:10.751 --rc genhtml_function_coverage=1 00:10:10.751 --rc genhtml_legend=1 00:10:10.751 --rc geninfo_all_blocks=1 00:10:10.751 --rc geninfo_unexecuted_blocks=1 00:10:10.751 00:10:10.751 ' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.751 --rc genhtml_branch_coverage=1 00:10:10.751 --rc genhtml_function_coverage=1 00:10:10.751 --rc genhtml_legend=1 00:10:10.751 --rc geninfo_all_blocks=1 00:10:10.751 --rc geninfo_unexecuted_blocks=1 00:10:10.751 00:10:10.751 ' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.751 --rc genhtml_branch_coverage=1 00:10:10.751 --rc genhtml_function_coverage=1 00:10:10.751 --rc genhtml_legend=1 00:10:10.751 --rc geninfo_all_blocks=1 00:10:10.751 --rc geninfo_unexecuted_blocks=1 00:10:10.751 00:10:10.751 ' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:10.751 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:10.751 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:10.751 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:10.752 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:10.752 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:10.752 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:10.752 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:10.752 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:10.753 #define SPDK_CONFIG_H 00:10:10.753 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:10.753 #define SPDK_CONFIG_APPS 1 00:10:10.753 #define SPDK_CONFIG_ARCH native 00:10:10.753 #undef SPDK_CONFIG_ASAN 00:10:10.753 #undef SPDK_CONFIG_AVAHI 00:10:10.753 #undef SPDK_CONFIG_CET 00:10:10.753 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:10.753 #define SPDK_CONFIG_COVERAGE 1 00:10:10.753 #define SPDK_CONFIG_CROSS_PREFIX 00:10:10.753 #undef SPDK_CONFIG_CRYPTO 00:10:10.753 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:10.753 #undef SPDK_CONFIG_CUSTOMOCF 00:10:10.753 #undef SPDK_CONFIG_DAOS 00:10:10.753 #define SPDK_CONFIG_DAOS_DIR 00:10:10.753 #define SPDK_CONFIG_DEBUG 1 00:10:10.753 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:10.753 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:10.753 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:10.753 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:10.753 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:10.753 #undef SPDK_CONFIG_DPDK_UADK 00:10:10.753 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:10.753 #define SPDK_CONFIG_EXAMPLES 1 00:10:10.753 #undef SPDK_CONFIG_FC 00:10:10.753 #define SPDK_CONFIG_FC_PATH 00:10:10.753 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:10.753 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:10.753 #define SPDK_CONFIG_FSDEV 1 00:10:10.753 #undef SPDK_CONFIG_FUSE 00:10:10.753 #undef SPDK_CONFIG_FUZZER 00:10:10.753 #define SPDK_CONFIG_FUZZER_LIB 00:10:10.753 #undef SPDK_CONFIG_GOLANG 00:10:10.753 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:10.753 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:10.753 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:10.753 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:10.753 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:10.753 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:10.753 #undef SPDK_CONFIG_HAVE_LZ4 00:10:10.753 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:10.753 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:10.753 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:10.753 #define SPDK_CONFIG_IDXD 1 00:10:10.753 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:10.753 #undef SPDK_CONFIG_IPSEC_MB 00:10:10.753 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:10.753 #define SPDK_CONFIG_ISAL 1 00:10:10.753 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:10.753 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:10.753 #define SPDK_CONFIG_LIBDIR 00:10:10.753 #undef SPDK_CONFIG_LTO 00:10:10.753 #define SPDK_CONFIG_MAX_LCORES 128 00:10:10.753 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:10.753 #define SPDK_CONFIG_NVME_CUSE 1 00:10:10.753 #undef SPDK_CONFIG_OCF 00:10:10.753 #define SPDK_CONFIG_OCF_PATH 00:10:10.753 #define SPDK_CONFIG_OPENSSL_PATH 00:10:10.753 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:10.753 #define SPDK_CONFIG_PGO_DIR 00:10:10.753 #undef SPDK_CONFIG_PGO_USE 00:10:10.753 #define SPDK_CONFIG_PREFIX /usr/local 00:10:10.753 #undef SPDK_CONFIG_RAID5F 00:10:10.753 #undef SPDK_CONFIG_RBD 00:10:10.753 #define SPDK_CONFIG_RDMA 1 00:10:10.753 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:10.753 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:10.753 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:10.753 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:10.753 #define SPDK_CONFIG_SHARED 1 00:10:10.753 #undef SPDK_CONFIG_SMA 00:10:10.753 #define SPDK_CONFIG_TESTS 1 00:10:10.753 #undef SPDK_CONFIG_TSAN 00:10:10.753 #define SPDK_CONFIG_UBLK 1 00:10:10.753 #define SPDK_CONFIG_UBSAN 1 00:10:10.753 #undef SPDK_CONFIG_UNIT_TESTS 00:10:10.753 #undef SPDK_CONFIG_URING 00:10:10.753 #define SPDK_CONFIG_URING_PATH 00:10:10.753 #undef SPDK_CONFIG_URING_ZNS 00:10:10.753 #undef SPDK_CONFIG_USDT 00:10:10.753 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:10.753 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:10.753 #define SPDK_CONFIG_VFIO_USER 1 00:10:10.753 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:10.753 #define SPDK_CONFIG_VHOST 1 00:10:10.753 #define SPDK_CONFIG_VIRTIO 1 00:10:10.753 #undef SPDK_CONFIG_VTUNE 00:10:10.753 #define SPDK_CONFIG_VTUNE_DIR 00:10:10.753 #define SPDK_CONFIG_WERROR 1 00:10:10.753 #define SPDK_CONFIG_WPDK_DIR 00:10:10.753 #undef SPDK_CONFIG_XNVME 00:10:10.753 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:10.753 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:10.754 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:10.754 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:10.754 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:10.754 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:10.754 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:10.754 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:10.755 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1612763 ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1612763 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.37jglK 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.37jglK/tests/target /tmp/spdk.37jglK 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88519151616 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=94489763840 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5970612224 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:10.756 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47234850816 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=18874851328 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=18897952768 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23101440 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47244402688 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:10:10.756 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=479232 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9448964096 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9448976384 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:10.756 * Looking for test storage... 
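The trace around this point shows `set_test_storage` probing mounts with `df -T`, recording per-mount sizes and available bytes, then checking whether the candidate directory's mount can hold the requested 2 GiB. A minimal, standalone sketch of that check (`pick_test_storage` is an illustrative name, not SPDK's; the real function also walks a list of candidate directories and caches the `df` output in arrays):

```shell
#!/usr/bin/env bash
# Hedged sketch of the storage-selection logic traced in this log:
# resolve the mount point backing a candidate directory with df, read its
# available space in bytes, and accept the directory only if it can hold
# the requested size. Uses GNU coreutils df (--output, -B1).
pick_test_storage() {
    local target_dir=$1 requested_size=$2
    local mount avail
    # Mount point backing the directory (row 2 skips the df header,
    # same idea as the awk '$1 !~ /Filesystem/' filter in the trace).
    mount=$(df --output=target "$target_dir" | awk 'NR==2')
    # Available space on that mount, in bytes.
    avail=$(df --output=avail -B1 "$target_dir" | awk 'NR==2')
    if (( avail >= requested_size )); then
        echo "$target_dir (mount $mount, $avail bytes free)"
        return 0
    fi
    return 1
}

# Same request as the trace: requested_size=2214592512 is roughly 2 GiB.
pick_test_storage /tmp $((2 * 1024 * 1024 * 1024)) || echo "not enough space"
```

When the chosen mount is too small, the real script falls back through `storage_candidates` (test dir, then a `mktemp -udt spdk.XXXXXX` scratch dir), which is why the trace above creates `/tmp/spdk.37jglK/tests/target`.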
00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88519151616 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8185204736 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.756 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:10.756 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:10.756 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:10.757 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:10.757 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:10.757 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.757 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.757 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.017 --rc genhtml_branch_coverage=1 00:10:11.017 --rc genhtml_function_coverage=1 00:10:11.017 --rc genhtml_legend=1 00:10:11.017 --rc geninfo_all_blocks=1 00:10:11.017 --rc geninfo_unexecuted_blocks=1 00:10:11.017 00:10:11.017 ' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.017 --rc genhtml_branch_coverage=1 00:10:11.017 --rc genhtml_function_coverage=1 00:10:11.017 --rc genhtml_legend=1 00:10:11.017 --rc geninfo_all_blocks=1 00:10:11.017 --rc geninfo_unexecuted_blocks=1 00:10:11.017 00:10:11.017 ' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.017 --rc genhtml_branch_coverage=1 00:10:11.017 --rc genhtml_function_coverage=1 00:10:11.017 --rc genhtml_legend=1 00:10:11.017 --rc geninfo_all_blocks=1 00:10:11.017 --rc geninfo_unexecuted_blocks=1 00:10:11.017 00:10:11.017 ' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.017 --rc genhtml_branch_coverage=1 00:10:11.017 --rc genhtml_function_coverage=1 00:10:11.017 --rc genhtml_legend=1 00:10:11.017 --rc geninfo_all_blocks=1 00:10:11.017 --rc geninfo_unexecuted_blocks=1 00:10:11.017 00:10:11.017 ' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.017 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.017 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.018 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.586 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:17.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:17.586 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.586 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:17.586 Found net devices under 0000:af:00.0: cvl_0_0 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:17.586 Found net devices under 0000:af:00.1: cvl_0_1 00:10:17.586 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.586 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:10:17.587 00:10:17.587 --- 10.0.0.2 ping statistics --- 00:10:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.587 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:17.587 00:10:17.587 --- 10.0.0.1 ping statistics --- 00:10:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.587 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:17.587 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.587 ************************************ 00:10:17.587 START TEST nvmf_filesystem_no_in_capsule 00:10:17.587 ************************************ 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1616128 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1616128 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1616128 ']' 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.587 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.587 [2024-12-06 11:11:49.901028] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:10:17.587 [2024-12-06 11:11:49.901073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.587 [2024-12-06 11:11:49.977004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.587 [2024-12-06 11:11:50.017862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.587 [2024-12-06 11:11:50.017898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:17.587 [2024-12-06 11:11:50.017906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.587 [2024-12-06 11:11:50.017912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.587 [2024-12-06 11:11:50.017917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.587 [2024-12-06 11:11:50.019359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.587 [2024-12-06 11:11:50.019474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.587 [2024-12-06 11:11:50.019583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.587 [2024-12-06 11:11:50.019584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.844 [2024-12-06 11:11:50.750577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.844 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.113 Malloc1 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.113 [2024-12-06 11:11:50.899797] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:18.113 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.113 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:18.113 { 00:10:18.113 "name": "Malloc1", 00:10:18.113 "aliases": [ 00:10:18.113 "a60eb249-afe7-4f4b-b6dc-dcd28995953b" 00:10:18.113 ], 00:10:18.113 "product_name": "Malloc disk", 00:10:18.113 "block_size": 512, 00:10:18.113 "num_blocks": 1048576, 00:10:18.113 "uuid": "a60eb249-afe7-4f4b-b6dc-dcd28995953b", 00:10:18.113 "assigned_rate_limits": { 00:10:18.113 "rw_ios_per_sec": 0, 00:10:18.113 "rw_mbytes_per_sec": 0, 00:10:18.113 "r_mbytes_per_sec": 0, 00:10:18.113 "w_mbytes_per_sec": 0 00:10:18.113 }, 00:10:18.113 "claimed": true, 00:10:18.113 "claim_type": "exclusive_write", 00:10:18.113 "zoned": false, 00:10:18.113 "supported_io_types": { 00:10:18.113 "read": true, 00:10:18.113 "write": true, 00:10:18.113 "unmap": true, 00:10:18.113 "flush": true, 00:10:18.113 "reset": true, 00:10:18.113 "nvme_admin": false, 00:10:18.113 "nvme_io": false, 00:10:18.113 "nvme_io_md": false, 00:10:18.113 "write_zeroes": true, 00:10:18.113 "zcopy": true, 00:10:18.113 "get_zone_info": false, 00:10:18.113 "zone_management": false, 00:10:18.113 "zone_append": false, 00:10:18.113 "compare": false, 00:10:18.113 "compare_and_write": 
false, 00:10:18.113 "abort": true, 00:10:18.113 "seek_hole": false, 00:10:18.113 "seek_data": false, 00:10:18.113 "copy": true, 00:10:18.113 "nvme_iov_md": false 00:10:18.113 }, 00:10:18.113 "memory_domains": [ 00:10:18.113 { 00:10:18.113 "dma_device_id": "system", 00:10:18.113 "dma_device_type": 1 00:10:18.113 }, 00:10:18.113 { 00:10:18.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.113 "dma_device_type": 2 00:10:18.113 } 00:10:18.113 ], 00:10:18.113 "driver_specific": {} 00:10:18.113 } 00:10:18.114 ]' 00:10:18.114 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:18.114 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:18.114 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:18.114 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:18.114 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:18.114 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:18.114 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:18.114 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.486 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:19.486 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:19.486 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.486 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:19.486 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:21.388 11:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:21.388 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:21.646 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:21.646 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:21.905 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:22.163 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:23.538 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:23.538 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:23.538 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.538 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.538 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 ************************************ 00:10:23.538 START TEST filesystem_ext4 00:10:23.538 ************************************ 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:23.539 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:23.539 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:23.539 mke2fs 1.47.0 (5-Feb-2023) 00:10:23.539 Discarding device blocks: 0/522240 done 00:10:23.539 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:23.539 Filesystem UUID: 48e08587-2dfd-4ab0-8b0e-3e73ca221224 00:10:23.539 Superblock backups stored on blocks: 00:10:23.539 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:23.539 00:10:23.539 Allocating group tables: 0/64 done 00:10:23.539 Writing inode tables: 0/64 done 00:10:26.072 Creating journal (8192 blocks): done 00:10:28.448 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:10:28.448 00:10:28.448 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:28.448 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:33.753 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.012 11:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1616128 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.012 00:10:34.012 real 0m10.707s 00:10:34.012 user 0m0.034s 00:10:34.012 sys 0m0.073s 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:34.012 ************************************ 00:10:34.012 END TEST filesystem_ext4 00:10:34.012 ************************************ 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:34.012 
11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.012 ************************************ 00:10:34.012 START TEST filesystem_btrfs 00:10:34.012 ************************************ 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:34.012 11:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.012 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:34.580 btrfs-progs v6.8.1 00:10:34.580 See https://btrfs.readthedocs.io for more information. 00:10:34.580 00:10:34.580 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:34.580 NOTE: several default settings have changed in version 5.15, please make sure 00:10:34.580 this does not affect your deployments: 00:10:34.580 - DUP for metadata (-m dup) 00:10:34.580 - enabled no-holes (-O no-holes) 00:10:34.580 - enabled free-space-tree (-R free-space-tree) 00:10:34.580 00:10:34.580 Label: (null) 00:10:34.580 UUID: 15d118df-dc7f-422e-a545-43e841b7b2e7 00:10:34.580 Node size: 16384 00:10:34.580 Sector size: 4096 (CPU page size: 4096) 00:10:34.580 Filesystem size: 510.00MiB 00:10:34.580 Block group profiles: 00:10:34.580 Data: single 8.00MiB 00:10:34.580 Metadata: DUP 32.00MiB 00:10:34.580 System: DUP 8.00MiB 00:10:34.580 SSD detected: yes 00:10:34.580 Zoned device: no 00:10:34.580 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:34.580 Checksum: crc32c 00:10:34.580 Number of devices: 1 00:10:34.580 Devices: 00:10:34.580 ID SIZE PATH 00:10:34.580 1 510.00MiB /dev/nvme0n1p1 00:10:34.580 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.580 11:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1616128 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.580 00:10:34.580 real 0m0.574s 00:10:34.580 user 0m0.031s 00:10:34.580 sys 0m0.108s 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.580 
11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.580 ************************************ 00:10:34.580 END TEST filesystem_btrfs 00:10:34.580 ************************************ 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.580 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.840 ************************************ 00:10:34.840 START TEST filesystem_xfs 00:10:34.840 ************************************ 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.840 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:34.840 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:34.840 = sectsz=512 attr=2, projid32bit=1 00:10:34.840 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:34.840 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:34.840 data = bsize=4096 blocks=130560, imaxpct=25 00:10:34.840 = sunit=0 swidth=0 blks 00:10:34.840 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:34.840 log =internal log bsize=4096 blocks=16384, version=2 00:10:34.840 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:34.840 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:35.776 Discarding blocks...Done. 
00:10:35.776 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:35.776 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1616128 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.149 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.149 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.425 00:10:37.425 real 0m2.558s 00:10:37.425 user 0m0.021s 00:10:37.425 sys 0m0.074s 00:10:37.425 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.425 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.425 ************************************ 00:10:37.425 END TEST filesystem_xfs 00:10:37.425 ************************************ 00:10:37.425 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:37.425 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:37.425 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:37.683 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1616128 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1616128 ']' 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1616128 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1616128 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1616128' 00:10:37.684 killing process with pid 1616128 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1616128 00:10:37.684 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1616128 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:38.250 00:10:38.250 real 0m21.050s 00:10:38.250 user 1m23.201s 00:10:38.250 sys 0m1.406s 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 ************************************ 00:10:38.250 END TEST nvmf_filesystem_no_in_capsule 00:10:38.250 ************************************ 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.250 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 ************************************ 00:10:38.250 START TEST nvmf_filesystem_in_capsule 00:10:38.250 ************************************ 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1620564 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1620564 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1620564 ']' 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.250 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.250 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 [2024-12-06 11:12:11.018609] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:10:38.250 [2024-12-06 11:12:11.018647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.250 [2024-12-06 11:12:11.097417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.250 [2024-12-06 11:12:11.132990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.250 [2024-12-06 11:12:11.133027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.250 [2024-12-06 11:12:11.133034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.250 [2024-12-06 11:12:11.133039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.250 [2024-12-06 11:12:11.133043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
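The `nvmf_tgt` launch line above passes `-m 0xF`, a hex core mask selecting cores 0 through 3, which is why four separate "Reactor started on core N" notices follow. A minimal sketch of how such a mask expands into a core list (illustrative only, not SPDK code):

```shell
#!/usr/bin/env bash
# Expand a core mask like the `-m 0xF` above into the set of selected cores.
# Bash arithmetic accepts the 0x prefix directly.
mask=0xF
cores=""
for ((c = 0; c < 32; c++)); do
  if (( (mask >> c) & 1 )); then
    cores="$cores $c"
  fi
done
echo "cores:$cores"   # cores: 0 1 2 3
```

With `0xF` (binary 1111) the low four bits are set, matching one reactor thread per core in the log.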
00:10:38.250 [2024-12-06 11:12:11.134680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.250 [2024-12-06 11:12:11.134799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.250 [2024-12-06 11:12:11.134887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.250 [2024-12-06 11:12:11.134888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 [2024-12-06 11:12:11.873344] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 Malloc1 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.186 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.186 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 [2024-12-06 11:12:12.019220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.186 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.187 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:39.187 { 00:10:39.187 "name": "Malloc1", 00:10:39.187 "aliases": [ 00:10:39.187 "e529a9ae-ea75-40d7-bdbe-a33a95d9ac79" 00:10:39.187 ], 00:10:39.187 "product_name": "Malloc disk", 00:10:39.187 "block_size": 512, 00:10:39.187 "num_blocks": 1048576, 00:10:39.187 "uuid": "e529a9ae-ea75-40d7-bdbe-a33a95d9ac79", 00:10:39.187 "assigned_rate_limits": { 00:10:39.187 "rw_ios_per_sec": 0, 00:10:39.187 "rw_mbytes_per_sec": 0, 00:10:39.187 "r_mbytes_per_sec": 0, 00:10:39.187 "w_mbytes_per_sec": 0 00:10:39.187 }, 00:10:39.187 "claimed": true, 00:10:39.187 "claim_type": "exclusive_write", 00:10:39.187 "zoned": false, 00:10:39.187 "supported_io_types": { 00:10:39.187 "read": true, 00:10:39.187 "write": true, 00:10:39.187 "unmap": true, 00:10:39.187 "flush": true, 00:10:39.187 "reset": true, 00:10:39.187 "nvme_admin": false, 00:10:39.187 "nvme_io": false, 00:10:39.187 "nvme_io_md": false, 00:10:39.187 "write_zeroes": true, 00:10:39.187 "zcopy": true, 00:10:39.187 "get_zone_info": false, 00:10:39.187 "zone_management": false, 00:10:39.187 "zone_append": false, 00:10:39.187 "compare": false, 00:10:39.187 "compare_and_write": false, 00:10:39.187 "abort": true, 00:10:39.187 "seek_hole": false, 00:10:39.187 "seek_data": false, 00:10:39.187 "copy": true, 00:10:39.187 "nvme_iov_md": false 00:10:39.187 }, 00:10:39.187 "memory_domains": [ 00:10:39.187 { 00:10:39.187 "dma_device_id": "system", 00:10:39.187 "dma_device_type": 1 00:10:39.187 }, 00:10:39.187 { 00:10:39.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.187 "dma_device_type": 2 00:10:39.187 } 00:10:39.187 ], 00:10:39.187 
"driver_specific": {} 00:10:39.187 } 00:10:39.187 ]' 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:39.187 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:39.446 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:39.446 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:39.446 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:39.446 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.446 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.823 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.823 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.823 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.823 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:40.823 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.726 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:42.726 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.984 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:43.242 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.618 ************************************ 00:10:44.618 START TEST filesystem_in_capsule_ext4 00:10:44.618 ************************************ 00:10:44.618 11:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:44.618 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:44.618 mke2fs 1.47.0 (5-Feb-2023) 00:10:44.618 Discarding device blocks: 
0/522240 done 00:10:44.618 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:44.618 Filesystem UUID: ad38297c-0712-4a3e-922d-2cd71f9f7fc4 00:10:44.618 Superblock backups stored on blocks: 00:10:44.618 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:44.618 00:10:44.618 Allocating group tables: 0/64 done 00:10:44.618 Writing inode tables: 0/64 done 00:10:44.618 Creating journal (8192 blocks): done 00:10:46.817 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:46.817 00:10:46.817 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:46.817 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.087 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1620564 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.346 00:10:52.346 real 0m7.883s 00:10:52.346 user 0m0.027s 00:10:52.346 sys 0m0.075s 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.346 ************************************ 00:10:52.346 END TEST filesystem_in_capsule_ext4 00:10:52.346 ************************************ 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.346 ************************************ 00:10:52.346 START 
TEST filesystem_in_capsule_btrfs 00:10:52.346 ************************************ 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:52.346 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.604 btrfs-progs v6.8.1 00:10:52.604 See https://btrfs.readthedocs.io for more information. 00:10:52.604 00:10:52.604 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:52.604 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.604 this does not affect your deployments: 00:10:52.604 - DUP for metadata (-m dup) 00:10:52.604 - enabled no-holes (-O no-holes) 00:10:52.604 - enabled free-space-tree (-R free-space-tree) 00:10:52.604 00:10:52.604 Label: (null) 00:10:52.604 UUID: 9e228c9a-b2da-4f22-89f7-c4fd14912c2a 00:10:52.604 Node size: 16384 00:10:52.604 Sector size: 4096 (CPU page size: 4096) 00:10:52.604 Filesystem size: 510.00MiB 00:10:52.604 Block group profiles: 00:10:52.604 Data: single 8.00MiB 00:10:52.604 Metadata: DUP 32.00MiB 00:10:52.605 System: DUP 8.00MiB 00:10:52.605 SSD detected: yes 00:10:52.605 Zoned device: no 00:10:52.605 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.605 Checksum: crc32c 00:10:52.605 Number of devices: 1 00:10:52.605 Devices: 00:10:52.605 ID SIZE PATH 00:10:52.605 1 510.00MiB /dev/nvme0n1p1 00:10:52.605 00:10:52.605 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:52.605 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1620564 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.540 00:10:53.540 real 0m1.186s 00:10:53.540 user 0m0.022s 00:10:53.540 sys 0m0.119s 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.540 ************************************ 00:10:53.540 END TEST filesystem_in_capsule_btrfs 00:10:53.540 ************************************ 00:10:53.540 11:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.540 ************************************ 00:10:53.540 START TEST filesystem_in_capsule_xfs 00:10:53.540 ************************************ 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:53.540 
11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:53.540 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:53.540 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:53.540 = sectsz=512 attr=2, projid32bit=1 00:10:53.540 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:53.540 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:53.540 data = bsize=4096 blocks=130560, imaxpct=25 00:10:53.540 = sunit=0 swidth=0 blks 00:10:53.540 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:53.540 log =internal log bsize=4096 blocks=16384, version=2 00:10:53.540 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:53.540 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:54.919 Discarding blocks...Done. 
00:10:54.919 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:54.919 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.820 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1620564 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.821 00:10:56.821 real 0m2.994s 00:10:56.821 user 0m0.022s 00:10:56.821 sys 0m0.078s 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.821 ************************************ 00:10:56.821 END TEST filesystem_in_capsule_xfs 00:10:56.821 ************************************ 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.821 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.081 11:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1620564 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1620564 ']' 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1620564 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.081 11:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1620564 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1620564' 00:10:57.081 killing process with pid 1620564 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1620564 00:10:57.081 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1620564 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.340 00:10:57.340 real 0m19.261s 00:10:57.340 user 1m16.077s 00:10:57.340 sys 0m1.420s 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.340 ************************************ 00:10:57.340 END TEST nvmf_filesystem_in_capsule 00:10:57.340 ************************************ 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.340 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.340 rmmod nvme_tcp 00:10:57.599 rmmod nvme_fabrics 00:10:57.599 rmmod nvme_keyring 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.599 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.501 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.501 00:10:59.501 real 0m49.110s 00:10:59.501 user 2m41.344s 00:10:59.501 sys 0m7.569s 00:10:59.501 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.501 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.501 ************************************ 00:10:59.501 END TEST nvmf_filesystem 00:10:59.501 ************************************ 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.761 ************************************ 00:10:59.761 START TEST nvmf_target_discovery 00:10:59.761 ************************************ 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.761 * Looking for test storage... 
00:10:59.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:59.761 
11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.761 --rc genhtml_branch_coverage=1 00:10:59.761 --rc genhtml_function_coverage=1 00:10:59.761 --rc genhtml_legend=1 00:10:59.761 --rc geninfo_all_blocks=1 00:10:59.761 --rc geninfo_unexecuted_blocks=1 00:10:59.761 00:10:59.761 ' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.761 --rc genhtml_branch_coverage=1 00:10:59.761 --rc genhtml_function_coverage=1 00:10:59.761 --rc genhtml_legend=1 00:10:59.761 --rc geninfo_all_blocks=1 00:10:59.761 --rc geninfo_unexecuted_blocks=1 00:10:59.761 00:10:59.761 ' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.761 --rc genhtml_branch_coverage=1 00:10:59.761 --rc genhtml_function_coverage=1 00:10:59.761 --rc genhtml_legend=1 00:10:59.761 --rc geninfo_all_blocks=1 00:10:59.761 --rc geninfo_unexecuted_blocks=1 00:10:59.761 00:10:59.761 ' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.761 --rc genhtml_branch_coverage=1 00:10:59.761 --rc genhtml_function_coverage=1 00:10:59.761 --rc genhtml_legend=1 00:10:59.761 --rc geninfo_all_blocks=1 00:10:59.761 --rc geninfo_unexecuted_blocks=1 00:10:59.761 00:10:59.761 ' 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.761 11:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.761 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.762 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.021 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.593 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.593 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.593 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:06.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:06.594 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.594 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:06.594 Found net devices under 0000:af:00.0: cvl_0_0 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.594 11:12:38 
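The sysfs walk traced here (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `##*/` strip at common.sh@427) can be sketched standalone. The PCI address below is illustrative, not taken from this run:

```shell
# Sketch of the per-device lookup common.sh performs: list the kernel net
# interfaces bound to one PCI function, then strip the sysfs path prefix.
# 0000:af:00.0 is an assumed address for illustration.
pci=0000:af:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue          # glob may not match on other machines
    name=${dev##*/}                    # keep only the interface name
    echo "Found net devices under $pci: $name"
done
```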
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:06.594 Found net devices under 0000:af:00.1: cvl_0_1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:11:06.594 00:11:06.594 --- 10.0.0.2 ping statistics --- 00:11:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.594 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:11:06.594 00:11:06.594 --- 10.0.0.1 ping statistics --- 00:11:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.594 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.594 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1627934 00:11:06.595 11:12:38 
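The namespace plumbing traced between common.sh@271 and @291 follows a fixed pattern: move the target-side port into its own netns so initiator and target traffic actually cross the wire, address both sides, open TCP/4420, and ping in both directions. A sketch, wrapped in a function so it is safe to source (running it requires root and the two cvl_0_* ports):

```shell
# Sketch of the target/initiator split performed in the trace above.
# Interface names and addresses mirror the logged commands.
setup_tcp_netns() {
    local ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                   # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1            # target ns -> initiator
}
```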
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1627934 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1627934 ']' 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.595 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.595 [2024-12-06 11:12:38.773080] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:11:06.595 [2024-12-06 11:12:38.773116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.595 [2024-12-06 11:12:38.847808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.595 [2024-12-06 11:12:38.887929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:06.595 [2024-12-06 11:12:38.887965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.595 [2024-12-06 11:12:38.887972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.595 [2024-12-06 11:12:38.887977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.595 [2024-12-06 11:12:38.887982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.595 [2024-12-06 11:12:38.892076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.595 [2024-12-06 11:12:38.892107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.595 [2024-12-06 11:12:38.892216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.595 [2024-12-06 11:12:38.892217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.854 [2024-12-06 11:12:39.633578] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:06.854 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 Null1 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 
11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 [2024-12-06 11:12:39.692201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 Null2 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 
11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 Null3 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 Null4 00:11:06.855 
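The four create iterations traced above (discovery.sh@26-30, repeated for Null1 through Null4) reduce to one loop. A paraphrase as a sourceable function, with `rpc_cmd` assumed to be the autotest RPC wrapper seen in the trace and the bdev size arguments (102400, 512) copied verbatim from the log:

```shell
# Paraphrase of the logged loop: per index, one null bdev, one subsystem
# with a fixed serial number, one namespace, one TCP listener on
# 10.0.0.2:4420. rpc_cmd is assumed to be in scope from the harness.
create_null_targets() {
    local i
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
}
```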
11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.855 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:07.115 00:11:07.115 Discovery Log Number of Records 6, Generation counter 6 00:11:07.115 =====Discovery Log Entry 0====== 00:11:07.115 trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: current discovery subsystem 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4420 00:11:07.115 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: explicit discovery connections, duplicate discovery information 00:11:07.115 sectype: none 00:11:07.115 =====Discovery Log Entry 1====== 00:11:07.115 trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: nvme subsystem 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4420 00:11:07.115 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: none 00:11:07.115 sectype: none 00:11:07.115 =====Discovery Log Entry 2====== 00:11:07.115 
trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: nvme subsystem 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4420 00:11:07.115 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: none 00:11:07.115 sectype: none 00:11:07.115 =====Discovery Log Entry 3====== 00:11:07.115 trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: nvme subsystem 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4420 00:11:07.115 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: none 00:11:07.115 sectype: none 00:11:07.115 =====Discovery Log Entry 4====== 00:11:07.115 trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: nvme subsystem 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4420 00:11:07.115 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: none 00:11:07.115 sectype: none 00:11:07.115 =====Discovery Log Entry 5====== 00:11:07.115 trtype: tcp 00:11:07.115 adrfam: ipv4 00:11:07.115 subtype: discovery subsystem referral 00:11:07.115 treq: not required 00:11:07.115 portid: 0 00:11:07.115 trsvcid: 4430 00:11:07.115 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:07.115 traddr: 10.0.0.2 00:11:07.115 eflags: none 00:11:07.115 sectype: none 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:07.115 Perform nvmf subsystem discovery via RPC 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.115 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.115 [ 00:11:07.115 { 00:11:07.115 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:07.115 "subtype": "Discovery", 00:11:07.115 "listen_addresses": [ 00:11:07.115 { 00:11:07.115 "trtype": "TCP", 00:11:07.115 "adrfam": "IPv4", 00:11:07.115 "traddr": "10.0.0.2", 00:11:07.115 "trsvcid": "4420" 00:11:07.115 } 00:11:07.115 ], 00:11:07.115 "allow_any_host": true, 00:11:07.115 "hosts": [] 00:11:07.115 }, 00:11:07.115 { 00:11:07.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.115 "subtype": "NVMe", 00:11:07.115 "listen_addresses": [ 00:11:07.115 { 00:11:07.115 "trtype": "TCP", 00:11:07.115 "adrfam": "IPv4", 00:11:07.115 "traddr": "10.0.0.2", 00:11:07.115 "trsvcid": "4420" 00:11:07.115 } 00:11:07.115 ], 00:11:07.115 "allow_any_host": true, 00:11:07.115 "hosts": [], 00:11:07.115 "serial_number": "SPDK00000000000001", 00:11:07.115 "model_number": "SPDK bdev Controller", 00:11:07.115 "max_namespaces": 32, 00:11:07.115 "min_cntlid": 1, 00:11:07.115 "max_cntlid": 65519, 00:11:07.115 "namespaces": [ 00:11:07.115 { 00:11:07.115 "nsid": 1, 00:11:07.115 "bdev_name": "Null1", 00:11:07.115 "name": "Null1", 00:11:07.115 "nguid": "CB6A8617252D42F298F31103309D7681", 00:11:07.115 "uuid": "cb6a8617-252d-42f2-98f3-1103309d7681" 00:11:07.115 } 00:11:07.115 ] 00:11:07.115 }, 00:11:07.115 { 00:11:07.115 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:07.115 "subtype": "NVMe", 00:11:07.115 "listen_addresses": [ 00:11:07.115 { 00:11:07.115 "trtype": "TCP", 00:11:07.115 "adrfam": "IPv4", 00:11:07.115 "traddr": "10.0.0.2", 00:11:07.115 "trsvcid": "4420" 00:11:07.115 } 00:11:07.115 ], 00:11:07.115 "allow_any_host": true, 00:11:07.115 "hosts": [], 00:11:07.115 "serial_number": "SPDK00000000000002", 00:11:07.115 "model_number": "SPDK bdev Controller", 00:11:07.115 "max_namespaces": 32, 00:11:07.115 "min_cntlid": 1, 00:11:07.115 "max_cntlid": 65519, 00:11:07.115 "namespaces": [ 00:11:07.115 { 00:11:07.115 "nsid": 1, 00:11:07.115 "bdev_name": "Null2", 00:11:07.115 "name": "Null2", 00:11:07.115 "nguid": "C492722868404599A3AC6E28C2190BA0", 
00:11:07.115 "uuid": "c4927228-6840-4599-a3ac-6e28c2190ba0" 00:11:07.115 } 00:11:07.115 ] 00:11:07.115 }, 00:11:07.115 { 00:11:07.115 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:07.115 "subtype": "NVMe", 00:11:07.115 "listen_addresses": [ 00:11:07.115 { 00:11:07.115 "trtype": "TCP", 00:11:07.115 "adrfam": "IPv4", 00:11:07.115 "traddr": "10.0.0.2", 00:11:07.115 "trsvcid": "4420" 00:11:07.115 } 00:11:07.115 ], 00:11:07.115 "allow_any_host": true, 00:11:07.115 "hosts": [], 00:11:07.115 "serial_number": "SPDK00000000000003", 00:11:07.115 "model_number": "SPDK bdev Controller", 00:11:07.115 "max_namespaces": 32, 00:11:07.115 "min_cntlid": 1, 00:11:07.115 "max_cntlid": 65519, 00:11:07.115 "namespaces": [ 00:11:07.115 { 00:11:07.115 "nsid": 1, 00:11:07.115 "bdev_name": "Null3", 00:11:07.115 "name": "Null3", 00:11:07.115 "nguid": "01E2A7DD1C81497CA0001F06351746B9", 00:11:07.115 "uuid": "01e2a7dd-1c81-497c-a000-1f06351746b9" 00:11:07.115 } 00:11:07.115 ] 00:11:07.115 }, 00:11:07.115 { 00:11:07.115 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:07.116 "subtype": "NVMe", 00:11:07.116 "listen_addresses": [ 00:11:07.116 { 00:11:07.116 "trtype": "TCP", 00:11:07.116 "adrfam": "IPv4", 00:11:07.116 "traddr": "10.0.0.2", 00:11:07.116 "trsvcid": "4420" 00:11:07.116 } 00:11:07.116 ], 00:11:07.116 "allow_any_host": true, 00:11:07.116 "hosts": [], 00:11:07.116 "serial_number": "SPDK00000000000004", 00:11:07.116 "model_number": "SPDK bdev Controller", 00:11:07.116 "max_namespaces": 32, 00:11:07.116 "min_cntlid": 1, 00:11:07.116 "max_cntlid": 65519, 00:11:07.116 "namespaces": [ 00:11:07.116 { 00:11:07.116 "nsid": 1, 00:11:07.116 "bdev_name": "Null4", 00:11:07.116 "name": "Null4", 00:11:07.116 "nguid": "E34097EBA5514848A40398D302CF0417", 00:11:07.116 "uuid": "e34097eb-a551-4848-a403-98d302cf0417" 00:11:07.116 } 00:11:07.116 ] 00:11:07.116 } 00:11:07.116 ] 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 
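The `nvmf_get_subsystems` dump above is plain JSON-RPC output, so it is scriptable. A sketch of extracting NQNs with python3; the here-string is a trimmed two-entry stand-in, not the full five-subsystem document above:

```shell
# Parse a trimmed stand-in for the RPC output above and print one
# "nqn subtype" line per entry; real input would come from
# `rpc_cmd nvmf_get_subsystems` (jq would serve equally well).
rpc_json='[{"nqn":"nqn.2014-08.org.nvmexpress.discovery","subtype":"Discovery"},
 {"nqn":"nqn.2016-06.io.spdk:cnode1","subtype":"NVMe"}]'
printf '%s' "$rpc_json" | python3 -c '
import json, sys
for sub in json.load(sys.stdin):
    print(sub["nqn"], sub["subtype"])
'
```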
11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.116 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.376 rmmod nvme_tcp 00:11:07.376 rmmod nvme_fabrics 00:11:07.376 rmmod nvme_keyring 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1627934 ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1627934 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1627934 ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1627934 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627934 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1627934' 00:11:07.376 killing process with pid 1627934 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1627934 00:11:07.376 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1627934 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.635 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.543 00:11:09.543 real 0m9.961s 00:11:09.543 user 0m7.973s 00:11:09.543 sys 0m4.851s 00:11:09.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.543 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.543 ************************************ 00:11:09.543 END TEST nvmf_target_discovery 00:11:09.543 ************************************ 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.803 ************************************ 00:11:09.803 START TEST nvmf_referrals 00:11:09.803 ************************************ 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:09.803 * Looking for test storage... 
00:11:09.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:09.803 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:09.804 11:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.804 
--rc genhtml_branch_coverage=1 00:11:09.804 --rc genhtml_function_coverage=1 00:11:09.804 --rc genhtml_legend=1 00:11:09.804 --rc geninfo_all_blocks=1 00:11:09.804 --rc geninfo_unexecuted_blocks=1 00:11:09.804 00:11:09.804 ' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.804 --rc genhtml_branch_coverage=1 00:11:09.804 --rc genhtml_function_coverage=1 00:11:09.804 --rc genhtml_legend=1 00:11:09.804 --rc geninfo_all_blocks=1 00:11:09.804 --rc geninfo_unexecuted_blocks=1 00:11:09.804 00:11:09.804 ' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.804 --rc genhtml_branch_coverage=1 00:11:09.804 --rc genhtml_function_coverage=1 00:11:09.804 --rc genhtml_legend=1 00:11:09.804 --rc geninfo_all_blocks=1 00:11:09.804 --rc geninfo_unexecuted_blocks=1 00:11:09.804 00:11:09.804 ' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.804 --rc genhtml_branch_coverage=1 00:11:09.804 --rc genhtml_function_coverage=1 00:11:09.804 --rc genhtml_legend=1 00:11:09.804 --rc geninfo_all_blocks=1 00:11:09.804 --rc geninfo_unexecuted_blocks=1 00:11:09.804 00:11:09.804 ' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.804 
11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.804 11:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.804 11:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.804 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.805 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.805 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:16.375 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:16.375 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:16.375 Found net devices under 0000:af:00.0: cvl_0_0 00:11:16.375 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:16.375 Found net devices under 0000:af:00.1: cvl_0_1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:11:16.375 00:11:16.375 --- 10.0.0.2 ping statistics --- 00:11:16.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.375 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:11:16.375 00:11:16.375 --- 10.0.0.1 ping statistics --- 00:11:16.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.375 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1631754 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1631754 00:11:16.375 
11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1631754 ']' 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.375 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.375 [2024-12-06 11:12:48.778879] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:11:16.375 [2024-12-06 11:12:48.778929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.375 [2024-12-06 11:12:48.856867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.375 [2024-12-06 11:12:48.896472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.375 [2024-12-06 11:12:48.896509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:16.375 [2024-12-06 11:12:48.896518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.375 [2024-12-06 11:12:48.896523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.375 [2024-12-06 11:12:48.896528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.375 [2024-12-06 11:12:48.898113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.375 [2024-12-06 11:12:48.898226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.375 [2024-12-06 11:12:48.898339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.375 [2024-12-06 11:12:48.898340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 [2024-12-06 11:12:49.633079] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 [2024-12-06 11:12:49.646007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:16.942 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:16.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.200 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.201 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.201 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.458 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.459 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.716 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.975 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:18.233 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.234 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:18.234 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:18.234 11:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:18.234 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:18.234 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:18.234 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.234 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.493 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.752 rmmod nvme_tcp 00:11:18.752 rmmod nvme_fabrics 00:11:18.752 rmmod nvme_keyring 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1631754 ']' 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1631754 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1631754 ']' 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1631754 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631754 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631754' 00:11:18.752 killing process with pid 1631754 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1631754 00:11:18.752 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1631754 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.012 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.549 00:11:21.549 real 0m11.349s 00:11:21.549 user 0m14.202s 00:11:21.549 sys 0m5.266s 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.549 
************************************ 00:11:21.549 END TEST nvmf_referrals 00:11:21.549 ************************************ 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.549 ************************************ 00:11:21.549 START TEST nvmf_connect_disconnect 00:11:21.549 ************************************ 00:11:21.549 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:21.549 * Looking for test storage... 
00:11:21.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:21.549 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.550 --rc genhtml_branch_coverage=1 00:11:21.550 --rc genhtml_function_coverage=1 00:11:21.550 --rc genhtml_legend=1 00:11:21.550 --rc geninfo_all_blocks=1 00:11:21.550 --rc geninfo_unexecuted_blocks=1 00:11:21.550 00:11:21.550 ' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.550 --rc genhtml_branch_coverage=1 00:11:21.550 --rc genhtml_function_coverage=1 00:11:21.550 --rc genhtml_legend=1 00:11:21.550 --rc geninfo_all_blocks=1 00:11:21.550 --rc geninfo_unexecuted_blocks=1 00:11:21.550 00:11:21.550 ' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.550 --rc genhtml_branch_coverage=1 00:11:21.550 --rc genhtml_function_coverage=1 00:11:21.550 --rc genhtml_legend=1 00:11:21.550 --rc geninfo_all_blocks=1 00:11:21.550 --rc geninfo_unexecuted_blocks=1 00:11:21.550 00:11:21.550 ' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.550 --rc genhtml_branch_coverage=1 00:11:21.550 --rc genhtml_function_coverage=1 00:11:21.550 --rc genhtml_legend=1 00:11:21.550 --rc geninfo_all_blocks=1 00:11:21.550 --rc geninfo_unexecuted_blocks=1 00:11:21.550 00:11:21.550 ' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.550 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.125 11:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.125 11:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:28.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:28.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.125 11:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:28.125 Found net devices under 0000:af:00.0: cvl_0_0 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.125 11:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:28.125 Found net devices under 0000:af:00.1: cvl_0_1 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.125 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.126 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.126 11:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:28.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:28.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms
00:11:28.126
00:11:28.126 --- 10.0.0.2 ping statistics ---
00:11:28.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.126 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:28.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:28.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:11:28.126
00:11:28.126 --- 10.0.0.1 ping statistics ---
00:11:28.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.126 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- #
nvmfpid=1636058 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1636058 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1636058 ']' 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.126 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.126 [2024-12-06 11:13:00.266536] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:11:28.126 [2024-12-06 11:13:00.266577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.126 [2024-12-06 11:13:00.343757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.126 [2024-12-06 11:13:00.383969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:28.126 [2024-12-06 11:13:00.384005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.126 [2024-12-06 11:13:00.384012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.126 [2024-12-06 11:13:00.384017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.126 [2024-12-06 11:13:00.384022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.126 [2024-12-06 11:13:00.389077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.126 [2024-12-06 11:13:00.389106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.126 [2024-12-06 11:13:00.389220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.126 [2024-12-06 11:13:00.389219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:28.385 11:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.385 [2024-12-06 11:13:01.126947] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.385 11:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:28.385 [2024-12-06 11:13:01.186723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:11:28.385 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:11:31.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:35.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:38.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:41.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:45.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:11:45.945 11:13:18
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.945 rmmod nvme_tcp 00:11:45.945 rmmod nvme_fabrics 00:11:45.945 rmmod nvme_keyring 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1636058 ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1636058 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1636058 ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1636058 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1636058 
00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1636058' 00:11:45.945 killing process with pid 1636058 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1636058 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1636058 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.945 11:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:45.946 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:47.853
00:11:47.853 real 0m26.542s
00:11:47.853 user 1m13.420s
00:11:47.853 sys 0m5.928s
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:47.853 ************************************
00:11:47.853 END TEST nvmf_connect_disconnect
00:11:47.853 ************************************
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:47.853 ************************************
00:11:47.853 START TEST nvmf_multitarget
00:11:47.853 ************************************
00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:47.853 * Looking for test storage...
00:11:47.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.853 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.853 --rc genhtml_branch_coverage=1 00:11:47.853 --rc genhtml_function_coverage=1 00:11:47.853 --rc genhtml_legend=1 00:11:47.853 --rc geninfo_all_blocks=1 00:11:47.853 --rc geninfo_unexecuted_blocks=1 00:11:47.853 00:11:47.853 ' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.853 --rc genhtml_branch_coverage=1 00:11:47.853 --rc genhtml_function_coverage=1 00:11:47.853 --rc genhtml_legend=1 00:11:47.853 --rc geninfo_all_blocks=1 00:11:47.853 --rc geninfo_unexecuted_blocks=1 00:11:47.853 00:11:47.853 ' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.853 --rc genhtml_branch_coverage=1 00:11:47.853 --rc genhtml_function_coverage=1 00:11:47.853 --rc genhtml_legend=1 00:11:47.853 --rc geninfo_all_blocks=1 00:11:47.853 --rc geninfo_unexecuted_blocks=1 00:11:47.853 00:11:47.853 ' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.853 --rc genhtml_branch_coverage=1 00:11:47.853 --rc genhtml_function_coverage=1 00:11:47.853 --rc genhtml_legend=1 00:11:47.853 --rc geninfo_all_blocks=1 00:11:47.853 --rc geninfo_unexecuted_blocks=1 00:11:47.853 00:11:47.853 ' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.853 11:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.853 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.854 11:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.854 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:54.433 11:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.433 11:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:54.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:54.433 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.433 11:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:54.433 Found net devices under 0000:af:00.0: cvl_0_0 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.433 
11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:54.433 Found net devices under 0000:af:00.1: cvl_0_1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.433 11:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:11:54.433 00:11:54.433 --- 10.0.0.2 ping statistics --- 00:11:54.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.433 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:54.433 00:11:54.433 --- 10.0.0.1 ping statistics --- 00:11:54.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.433 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.433 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1643061 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1643061 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1643061 ']' 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.434 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.434 [2024-12-06 11:13:26.809557] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:11:54.434 [2024-12-06 11:13:26.809601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.434 [2024-12-06 11:13:26.884493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.434 [2024-12-06 11:13:26.924262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.434 [2024-12-06 11:13:26.924300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:54.434 [2024-12-06 11:13:26.924306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.434 [2024-12-06 11:13:26.924312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.434 [2024-12-06 11:13:26.924317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.434 [2024-12-06 11:13:26.925682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.434 [2024-12-06 11:13:26.925800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.434 [2024-12-06 11:13:26.925910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.434 [2024-12-06 11:13:26.925911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:54.434 11:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:54.434 "nvmf_tgt_1" 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:54.434 "nvmf_tgt_2" 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:54.434 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:54.692 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:54.692 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:54.692 true 00:11:54.692 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:54.950 true 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.950 rmmod nvme_tcp 00:11:54.950 rmmod nvme_fabrics 00:11:54.950 rmmod nvme_keyring 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1643061 ']' 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1643061 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1643061 ']' 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1643061 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.950 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643061 00:11:55.209 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.209 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.209 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643061' 00:11:55.209 killing process with pid 1643061 00:11:55.209 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1643061 00:11:55.209 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1643061 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.209 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.743 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.744 00:11:57.744 real 0m9.568s 00:11:57.744 user 0m6.987s 00:11:57.744 sys 0m4.900s 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 ************************************ 00:11:57.744 END TEST nvmf_multitarget 00:11:57.744 ************************************ 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 ************************************ 00:11:57.744 START TEST nvmf_rpc 00:11:57.744 ************************************ 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:57.744 * Looking for test storage... 
00:11:57.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.744 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.744 --rc genhtml_branch_coverage=1 00:11:57.744 --rc genhtml_function_coverage=1 00:11:57.744 --rc genhtml_legend=1 00:11:57.744 --rc geninfo_all_blocks=1 00:11:57.744 --rc geninfo_unexecuted_blocks=1 
00:11:57.744 00:11:57.744 ' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.744 --rc genhtml_branch_coverage=1 00:11:57.744 --rc genhtml_function_coverage=1 00:11:57.744 --rc genhtml_legend=1 00:11:57.744 --rc geninfo_all_blocks=1 00:11:57.744 --rc geninfo_unexecuted_blocks=1 00:11:57.744 00:11:57.744 ' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.744 --rc genhtml_branch_coverage=1 00:11:57.744 --rc genhtml_function_coverage=1 00:11:57.744 --rc genhtml_legend=1 00:11:57.744 --rc geninfo_all_blocks=1 00:11:57.744 --rc geninfo_unexecuted_blocks=1 00:11:57.744 00:11:57.744 ' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.744 --rc genhtml_branch_coverage=1 00:11:57.744 --rc genhtml_function_coverage=1 00:11:57.744 --rc genhtml_legend=1 00:11:57.744 --rc geninfo_all_blocks=1 00:11:57.744 --rc geninfo_unexecuted_blocks=1 00:11:57.744 00:11:57.744 ' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.744 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.744 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.745 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.745 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.316 
11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:12:04.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:04.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.316 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:04.317 Found net devices under 0000:af:00.0: cvl_0_0 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:04.317 Found net devices under 0000:af:00.1: cvl_0_1 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.317 11:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.317 
11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:12:04.317 00:12:04.317 --- 10.0.0.2 ping statistics --- 00:12:04.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.317 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:04.317 00:12:04.317 --- 10.0.0.1 ping statistics --- 00:12:04.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.317 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1646843 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1646843 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1646843 ']' 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.317 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.317 [2024-12-06 11:13:36.477484] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:12:04.317 [2024-12-06 11:13:36.477533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.317 [2024-12-06 11:13:36.552029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.317 [2024-12-06 11:13:36.592116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.317 [2024-12-06 11:13:36.592153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.317 [2024-12-06 11:13:36.592159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.317 [2024-12-06 11:13:36.592165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.317 [2024-12-06 11:13:36.592169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.317 [2024-12-06 11:13:36.593750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.317 [2024-12-06 11:13:36.593864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.317 [2024-12-06 11:13:36.593979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.317 [2024-12-06 11:13:36.593980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.576 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.577 11:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:04.577 "tick_rate": 2200000000, 00:12:04.577 "poll_groups": [ 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_000", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_001", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_002", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_003", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [] 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 }' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:04.577 11:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.577 [2024-12-06 11:13:37.441115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:04.577 "tick_rate": 2200000000, 00:12:04.577 "poll_groups": [ 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_000", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [ 00:12:04.577 { 00:12:04.577 "trtype": "TCP" 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_001", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 
"completed_nvme_io": 0, 00:12:04.577 "transports": [ 00:12:04.577 { 00:12:04.577 "trtype": "TCP" 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_002", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [ 00:12:04.577 { 00:12:04.577 "trtype": "TCP" 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 }, 00:12:04.577 { 00:12:04.577 "name": "nvmf_tgt_poll_group_003", 00:12:04.577 "admin_qpairs": 0, 00:12:04.577 "io_qpairs": 0, 00:12:04.577 "current_admin_qpairs": 0, 00:12:04.577 "current_io_qpairs": 0, 00:12:04.577 "pending_bdev_io": 0, 00:12:04.577 "completed_nvme_io": 0, 00:12:04.577 "transports": [ 00:12:04.577 { 00:12:04.577 "trtype": "TCP" 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 } 00:12:04.577 ] 00:12:04.577 }' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.577 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.837 
11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 Malloc1 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:04.837 11:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 [2024-12-06 11:13:37.627638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:04.837 [2024-12-06 11:13:37.662303] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:04.837 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:04.837 could not add new controller: failed to write to nvme-fabrics device 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.215 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.215 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.215 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.215 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.215 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.119 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.119 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.119 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.378 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:08.379 11:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.379 [2024-12-06 11:13:41.240281] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:08.379 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:08.379 could not add new controller: failed to write to nvme-fabrics device 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:08.379 
11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.379 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.757 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.757 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:09.757 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.757 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:09.757 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:11.661 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:11.920 11:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.920 [2024-12-06 11:13:44.715911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.920 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.296 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.296 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.296 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.296 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.296 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.193 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.450 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.451 
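Each pass of the loop traced above (target/rpc.sh@81 through @94) is the same subsystem lifecycle: create the subsystem, attach a TCP listener and the Malloc1 namespace, open it to any host, connect from the initiator and verify the serial, then disconnect and tear everything down. A minimal sketch of one iteration, assuming SPDK's `rpc.py` client and `nvme-cli` are on PATH and a target is live; the `RUN=echo` prefix is an illustrative dry-run switch, not part of the original script:

```shell
#!/usr/bin/env bash
# One subsystem lifecycle iteration, distilled from the trace above.
# Set RUN=echo to print the command sequence instead of executing it
# (the default below executes against a live SPDK target).
nvmf_lifecycle_iteration() {
    local nqn=nqn.2016-06.io.spdk:cnode1
    local addr=10.0.0.2 port=4420 serial=SPDKISFASTANDAWESOME
    local run=${RUN:-}   # dry-run prefix; empty means really execute

    # Target side: build the subsystem over JSON-RPC.
    $run rpc.py nvmf_create_subsystem "$nqn" -s "$serial"
    $run rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a "$addr" -s "$port"
    $run rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $run rpc.py nvmf_subsystem_allow_any_host "$nqn"

    # Initiator side: connect, (the test then polls
    # `lsblk -l -o NAME,SERIAL | grep -c "$serial"`), disconnect.
    $run nvme connect -t tcp -n "$nqn" -a "$addr" -s "$port"
    $run nvme disconnect -n "$nqn"

    # Tear down so the next loop iteration starts clean.
    $run rpc.py nvmf_subsystem_remove_ns "$nqn" 5
    $run rpc.py nvmf_delete_subsystem "$nqn"
}

# Dry run: print the command sequence without touching a target.
RUN=echo nvmf_lifecycle_iteration
```

The earlier portion of the trace (rpc.sh@52–@73) exercises the same commands with `allow_any_host` disabled, showing the expected `nvmf_qpair_access_allowed` rejection until the host NQN is added via `nvmf_subsystem_add_host`.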
11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 [2024-12-06 11:13:48.202825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.824 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.824 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:16.824 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.824 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:16.824 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:18.727 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 11:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 [2024-12-06 11:13:51.731403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.987 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.365 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.365 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.365 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.365 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.365 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 [2024-12-06 11:13:55.164759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.268 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.650 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.650 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.650 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:23.650 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.650 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:25.553 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.812 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.813 [2024-12-06 11:13:58.596219] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.813 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.191 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.191 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.191 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.191 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.191 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.093 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 [2024-12-06 11:14:02.174645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 [2024-12-06 11:14:02.222725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 
11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:29.353 [2024-12-06 11:14:02.270859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.353 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 [2024-12-06 11:14:02.319016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 [2024-12-06 11:14:02.367184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:29.613 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:29.614 "tick_rate": 2200000000, 00:12:29.614 "poll_groups": [ 00:12:29.614 { 00:12:29.614 "name": "nvmf_tgt_poll_group_000", 00:12:29.614 "admin_qpairs": 2, 00:12:29.614 "io_qpairs": 196, 00:12:29.614 "current_admin_qpairs": 0, 00:12:29.614 "current_io_qpairs": 0, 00:12:29.614 "pending_bdev_io": 0, 00:12:29.614 "completed_nvme_io": 247, 00:12:29.614 "transports": [ 00:12:29.614 { 00:12:29.614 "trtype": "TCP" 00:12:29.614 } 00:12:29.614 ] 00:12:29.614 }, 00:12:29.614 { 00:12:29.614 "name": "nvmf_tgt_poll_group_001", 00:12:29.614 "admin_qpairs": 2, 00:12:29.614 "io_qpairs": 196, 00:12:29.614 "current_admin_qpairs": 0, 00:12:29.614 "current_io_qpairs": 0, 00:12:29.614 "pending_bdev_io": 0, 00:12:29.614 "completed_nvme_io": 247, 00:12:29.614 "transports": [ 00:12:29.614 { 00:12:29.614 "trtype": "TCP" 00:12:29.614 } 00:12:29.614 ] 00:12:29.614 }, 00:12:29.614 { 00:12:29.614 "name": "nvmf_tgt_poll_group_002", 00:12:29.614 "admin_qpairs": 1, 00:12:29.614 "io_qpairs": 196, 00:12:29.614 "current_admin_qpairs": 0, 00:12:29.614 "current_io_qpairs": 0, 00:12:29.614 "pending_bdev_io": 0, 
00:12:29.614 "completed_nvme_io": 246, 00:12:29.614 "transports": [ 00:12:29.614 { 00:12:29.614 "trtype": "TCP" 00:12:29.614 } 00:12:29.614 ] 00:12:29.614 }, 00:12:29.614 { 00:12:29.614 "name": "nvmf_tgt_poll_group_003", 00:12:29.614 "admin_qpairs": 2, 00:12:29.614 "io_qpairs": 196, 00:12:29.614 "current_admin_qpairs": 0, 00:12:29.614 "current_io_qpairs": 0, 00:12:29.614 "pending_bdev_io": 0, 00:12:29.614 "completed_nvme_io": 394, 00:12:29.614 "transports": [ 00:12:29.614 { 00:12:29.614 "trtype": "TCP" 00:12:29.614 } 00:12:29.614 ] 00:12:29.614 } 00:12:29.614 ] 00:12:29.614 }' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.614 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.614 rmmod nvme_tcp 00:12:29.614 rmmod nvme_fabrics 00:12:29.873 rmmod nvme_keyring 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1646843 ']' 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1646843 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1646843 ']' 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1646843 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1646843 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1646843' 00:12:29.873 killing process with pid 1646843 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1646843 00:12:29.873 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1646843 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.132 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.036 00:12:32.036 real 0m34.685s 00:12:32.036 user 1m46.342s 00:12:32.036 sys 0m6.563s 00:12:32.036 11:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.036 ************************************ 00:12:32.036 END TEST nvmf_rpc 00:12:32.036 ************************************ 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.036 ************************************ 00:12:32.036 START TEST nvmf_invalid 00:12:32.036 ************************************ 00:12:32.036 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:32.299 * Looking for test storage... 
00:12:32.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.299 --rc genhtml_branch_coverage=1 00:12:32.299 --rc 
genhtml_function_coverage=1 00:12:32.299 --rc genhtml_legend=1 00:12:32.299 --rc geninfo_all_blocks=1 00:12:32.299 --rc geninfo_unexecuted_blocks=1 00:12:32.299 00:12:32.299 ' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.299 --rc genhtml_branch_coverage=1 00:12:32.299 --rc genhtml_function_coverage=1 00:12:32.299 --rc genhtml_legend=1 00:12:32.299 --rc geninfo_all_blocks=1 00:12:32.299 --rc geninfo_unexecuted_blocks=1 00:12:32.299 00:12:32.299 ' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.299 --rc genhtml_branch_coverage=1 00:12:32.299 --rc genhtml_function_coverage=1 00:12:32.299 --rc genhtml_legend=1 00:12:32.299 --rc geninfo_all_blocks=1 00:12:32.299 --rc geninfo_unexecuted_blocks=1 00:12:32.299 00:12:32.299 ' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.299 --rc genhtml_branch_coverage=1 00:12:32.299 --rc genhtml_function_coverage=1 00:12:32.299 --rc genhtml_legend=1 00:12:32.299 --rc geninfo_all_blocks=1 00:12:32.299 --rc geninfo_unexecuted_blocks=1 00:12:32.299 00:12:32.299 ' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.299 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.299 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.299 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.966 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.967 11:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.967 11:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:38.967 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:38.967 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:38.967 Found net devices under 0000:af:00.0: cvl_0_0 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:38.967 Found net devices under 0000:af:00.1: cvl_0_1 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.967 11:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.967 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.967 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:12:38.967 00:12:38.967 --- 10.0.0.2 ping statistics --- 00:12:38.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.967 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:12:38.967 00:12:38.967 --- 10.0.0.1 ping statistics --- 00:12:38.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.967 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.967 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.968 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1655453 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1655453 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1655453 ']' 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.968 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:38.968 [2024-12-06 11:14:11.258926] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:12:38.968 [2024-12-06 11:14:11.258972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.968 [2024-12-06 11:14:11.335155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.968 [2024-12-06 11:14:11.374841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.968 [2024-12-06 11:14:11.374875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.968 [2024-12-06 11:14:11.374882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.968 [2024-12-06 11:14:11.374887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.968 [2024-12-06 11:14:11.374891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.968 [2024-12-06 11:14:11.376437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.968 [2024-12-06 11:14:11.376554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.968 [2024-12-06 11:14:11.376584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.968 [2024-12-06 11:14:11.376586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.228 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31686 00:12:39.487 [2024-12-06 11:14:12.279829] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:39.487 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:39.487 { 00:12:39.487 "nqn": "nqn.2016-06.io.spdk:cnode31686", 00:12:39.487 "tgt_name": "foobar", 00:12:39.487 "method": "nvmf_create_subsystem", 00:12:39.487 "req_id": 1 00:12:39.487 } 00:12:39.487 Got JSON-RPC error 
response 00:12:39.487 response: 00:12:39.487 { 00:12:39.487 "code": -32603, 00:12:39.487 "message": "Unable to find target foobar" 00:12:39.487 }' 00:12:39.487 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:39.487 { 00:12:39.487 "nqn": "nqn.2016-06.io.spdk:cnode31686", 00:12:39.487 "tgt_name": "foobar", 00:12:39.487 "method": "nvmf_create_subsystem", 00:12:39.487 "req_id": 1 00:12:39.487 } 00:12:39.487 Got JSON-RPC error response 00:12:39.487 response: 00:12:39.487 { 00:12:39.487 "code": -32603, 00:12:39.487 "message": "Unable to find target foobar" 00:12:39.487 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:39.487 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:39.487 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16847 00:12:39.745 [2024-12-06 11:14:12.480525] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16847: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:39.745 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:39.745 { 00:12:39.745 "nqn": "nqn.2016-06.io.spdk:cnode16847", 00:12:39.745 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.745 "method": "nvmf_create_subsystem", 00:12:39.745 "req_id": 1 00:12:39.745 } 00:12:39.745 Got JSON-RPC error response 00:12:39.745 response: 00:12:39.745 { 00:12:39.745 "code": -32602, 00:12:39.745 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.745 }' 00:12:39.745 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:39.745 { 00:12:39.745 "nqn": "nqn.2016-06.io.spdk:cnode16847", 00:12:39.745 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.745 "method": "nvmf_create_subsystem", 
00:12:39.745 "req_id": 1 00:12:39.745 } 00:12:39.745 Got JSON-RPC error response 00:12:39.745 response: 00:12:39.745 { 00:12:39.745 "code": -32602, 00:12:39.745 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.745 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:39.745 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:39.745 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11840 00:12:39.745 [2024-12-06 11:14:12.673113] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11840: invalid model number 'SPDK_Controller' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:40.005 { 00:12:40.005 "nqn": "nqn.2016-06.io.spdk:cnode11840", 00:12:40.005 "model_number": "SPDK_Controller\u001f", 00:12:40.005 "method": "nvmf_create_subsystem", 00:12:40.005 "req_id": 1 00:12:40.005 } 00:12:40.005 Got JSON-RPC error response 00:12:40.005 response: 00:12:40.005 { 00:12:40.005 "code": -32602, 00:12:40.005 "message": "Invalid MN SPDK_Controller\u001f" 00:12:40.005 }' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:40.005 { 00:12:40.005 "nqn": "nqn.2016-06.io.spdk:cnode11840", 00:12:40.005 "model_number": "SPDK_Controller\u001f", 00:12:40.005 "method": "nvmf_create_subsystem", 00:12:40.005 "req_id": 1 00:12:40.005 } 00:12:40.005 Got JSON-RPC error response 00:12:40.005 response: 00:12:40.005 { 00:12:40.005 "code": -32602, 00:12:40.005 "message": "Invalid MN SPDK_Controller\u001f" 00:12:40.005 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:40.005 11:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.005 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:40.006 11:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 
00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:40.006 
11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ',Jm|"9M+qatw!E;jPDP4N' 00:12:40.006 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ',Jm|"9M+qatw!E;jPDP4N' nqn.2016-06.io.spdk:cnode30659 00:12:40.266 [2024-12-06 11:14:13.002168] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30659: invalid serial number ',Jm|"9M+qatw!E;jPDP4N' 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:40.266 { 00:12:40.266 "nqn": "nqn.2016-06.io.spdk:cnode30659", 00:12:40.266 "serial_number": ",Jm|\"9M+qatw!E;jPDP4N", 00:12:40.266 "method": "nvmf_create_subsystem", 00:12:40.266 "req_id": 1 00:12:40.266 } 00:12:40.266 Got JSON-RPC error response 00:12:40.266 response: 00:12:40.266 { 00:12:40.266 "code": -32602, 00:12:40.266 "message": "Invalid SN ,Jm|\"9M+qatw!E;jPDP4N" 00:12:40.266 }' 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:40.266 { 00:12:40.266 "nqn": "nqn.2016-06.io.spdk:cnode30659", 00:12:40.266 "serial_number": ",Jm|\"9M+qatw!E;jPDP4N", 00:12:40.266 "method": "nvmf_create_subsystem", 00:12:40.266 "req_id": 1 00:12:40.266 } 00:12:40.266 Got JSON-RPC error response 00:12:40.266 response: 00:12:40.266 { 00:12:40.266 "code": -32602, 00:12:40.266 "message": 
"Invalid SN ,Jm|\"9M+qatw!E;jPDP4N" 00:12:40.266 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:40.266 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.267 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:40.267 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:40.268 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.268 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 
00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:12:40.527 
11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8ZFfbhv)EH>X#1,lf}V\9~0xb*EblY?]0?X"DB5O*' 00:12:40.527 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '8ZFfbhv)EH>X#1,lf}V\9~0xb*EblY?]0?X"DB5O*' nqn.2016-06.io.spdk:cnode15058 00:12:40.527 [2024-12-06 11:14:13.463689] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15058: invalid model number '8ZFfbhv)EH>X#1,lf}V\9~0xb*EblY?]0?X"DB5O*' 00:12:40.786 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:40.786 { 00:12:40.786 "nqn": "nqn.2016-06.io.spdk:cnode15058", 00:12:40.786 "model_number": "8ZFfbhv)EH>X#1,lf}V\\9~0xb*EblY?]0?X\"DB5O*", 00:12:40.786 "method": "nvmf_create_subsystem", 00:12:40.786 "req_id": 1 00:12:40.786 } 00:12:40.786 Got JSON-RPC error response 00:12:40.786 response: 00:12:40.786 { 00:12:40.786 "code": -32602, 00:12:40.786 "message": "Invalid MN 8ZFfbhv)EH>X#1,lf}V\\9~0xb*EblY?]0?X\"DB5O*" 00:12:40.786 }' 00:12:40.786 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:40.786 { 00:12:40.786 "nqn": "nqn.2016-06.io.spdk:cnode15058", 00:12:40.786 "model_number": "8ZFfbhv)EH>X#1,lf}V\\9~0xb*EblY?]0?X\"DB5O*", 00:12:40.786 "method": "nvmf_create_subsystem", 00:12:40.786 "req_id": 1 00:12:40.786 } 00:12:40.786 Got JSON-RPC error response 00:12:40.786 response: 00:12:40.786 { 00:12:40.786 "code": -32602, 00:12:40.786 "message": "Invalid MN 8ZFfbhv)EH>X#1,lf}V\\9~0xb*EblY?]0?X\"DB5O*" 00:12:40.786 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:40.786 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:40.786 [2024-12-06 11:14:13.656402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
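The xtrace above shows target/invalid.sh assembling the random model number one character at a time: `printf %x` turns a code point into hex, `echo -e '\xNN'` renders it, and `string+=` appends it inside a `(( ll < length ))` loop. The following is a minimal bash sketch of that same mechanism, replayed deterministically with the last seventeen codes visible in this log excerpt (98, 42, 69, ...); the array name `codes` is illustrative and not part of invalid.sh.

```shell
#!/usr/bin/env bash
# Replay the printf %x / echo -e '\xNN' / string+= steps traced in the
# log, using the character codes from the tail of the excerpt.
codes=(98 42 69 98 108 89 63 93 48 63 88 34 68 66 53 79 42)

string=''
length=${#codes[@]}
for (( ll = 0; ll < length; ll++ )); do
    # printf %x: decimal code -> hex digits (e.g. 98 -> 62)
    hex=$(printf '%x' "${codes[ll]}")
    # echo -e: hex escape -> the literal character (e.g. \x62 -> b)
    string+=$(echo -e "\\x$hex")
done

echo "$string"   # prints: b*EblY?]0?X"DB5O*
```

The result matches the tail of the model number the log then feeds to `rpc.py nvmf_create_subsystem -d`, which the target rejects with `Invalid MN ...`.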
00:12:40.786 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:41.046 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:41.046 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:41.046 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:41.046 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:41.046 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:41.304 [2024-12-06 11:14:14.050621] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:41.304 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:41.304 { 00:12:41.304 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.304 "listen_address": { 00:12:41.304 "trtype": "tcp", 00:12:41.304 "traddr": "", 00:12:41.304 "trsvcid": "4421" 00:12:41.304 }, 00:12:41.304 "method": "nvmf_subsystem_remove_listener", 00:12:41.304 "req_id": 1 00:12:41.304 } 00:12:41.304 Got JSON-RPC error response 00:12:41.304 response: 00:12:41.304 { 00:12:41.304 "code": -32602, 00:12:41.304 "message": "Invalid parameters" 00:12:41.304 }' 00:12:41.304 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:41.304 { 00:12:41.304 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.304 "listen_address": { 00:12:41.304 "trtype": "tcp", 00:12:41.304 "traddr": "", 00:12:41.304 "trsvcid": "4421" 00:12:41.304 }, 00:12:41.304 "method": "nvmf_subsystem_remove_listener", 00:12:41.304 "req_id": 1 00:12:41.304 } 00:12:41.304 Got JSON-RPC error response 
00:12:41.304 response: 00:12:41.304 { 00:12:41.304 "code": -32602, 00:12:41.304 "message": "Invalid parameters" 00:12:41.304 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:41.304 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30960 -i 0 00:12:41.304 [2024-12-06 11:14:14.235206] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30960: invalid cntlid range [0-65519] 00:12:41.563 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:41.563 { 00:12:41.563 "nqn": "nqn.2016-06.io.spdk:cnode30960", 00:12:41.563 "min_cntlid": 0, 00:12:41.563 "method": "nvmf_create_subsystem", 00:12:41.563 "req_id": 1 00:12:41.563 } 00:12:41.563 Got JSON-RPC error response 00:12:41.563 response: 00:12:41.563 { 00:12:41.563 "code": -32602, 00:12:41.563 "message": "Invalid cntlid range [0-65519]" 00:12:41.563 }' 00:12:41.563 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:41.563 { 00:12:41.563 "nqn": "nqn.2016-06.io.spdk:cnode30960", 00:12:41.563 "min_cntlid": 0, 00:12:41.563 "method": "nvmf_create_subsystem", 00:12:41.563 "req_id": 1 00:12:41.563 } 00:12:41.563 Got JSON-RPC error response 00:12:41.563 response: 00:12:41.563 { 00:12:41.563 "code": -32602, 00:12:41.563 "message": "Invalid cntlid range [0-65519]" 00:12:41.563 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.563 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13407 -i 65520 00:12:41.563 [2024-12-06 11:14:14.419835] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13407: invalid cntlid range [65520-65519] 00:12:41.563 11:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:41.563 { 00:12:41.563 "nqn": "nqn.2016-06.io.spdk:cnode13407", 00:12:41.563 "min_cntlid": 65520, 00:12:41.563 "method": "nvmf_create_subsystem", 00:12:41.563 "req_id": 1 00:12:41.563 } 00:12:41.563 Got JSON-RPC error response 00:12:41.563 response: 00:12:41.563 { 00:12:41.563 "code": -32602, 00:12:41.563 "message": "Invalid cntlid range [65520-65519]" 00:12:41.563 }' 00:12:41.563 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:41.563 { 00:12:41.563 "nqn": "nqn.2016-06.io.spdk:cnode13407", 00:12:41.563 "min_cntlid": 65520, 00:12:41.563 "method": "nvmf_create_subsystem", 00:12:41.563 "req_id": 1 00:12:41.563 } 00:12:41.563 Got JSON-RPC error response 00:12:41.563 response: 00:12:41.563 { 00:12:41.563 "code": -32602, 00:12:41.563 "message": "Invalid cntlid range [65520-65519]" 00:12:41.563 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.563 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20083 -I 0 00:12:41.822 [2024-12-06 11:14:14.608426] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20083: invalid cntlid range [1-0] 00:12:41.822 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:41.822 { 00:12:41.822 "nqn": "nqn.2016-06.io.spdk:cnode20083", 00:12:41.822 "max_cntlid": 0, 00:12:41.822 "method": "nvmf_create_subsystem", 00:12:41.822 "req_id": 1 00:12:41.822 } 00:12:41.822 Got JSON-RPC error response 00:12:41.822 response: 00:12:41.822 { 00:12:41.822 "code": -32602, 00:12:41.822 "message": "Invalid cntlid range [1-0]" 00:12:41.822 }' 00:12:41.822 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:41.822 { 00:12:41.822 "nqn": 
"nqn.2016-06.io.spdk:cnode20083", 00:12:41.822 "max_cntlid": 0, 00:12:41.822 "method": "nvmf_create_subsystem", 00:12:41.822 "req_id": 1 00:12:41.822 } 00:12:41.822 Got JSON-RPC error response 00:12:41.822 response: 00:12:41.822 { 00:12:41.822 "code": -32602, 00:12:41.822 "message": "Invalid cntlid range [1-0]" 00:12:41.822 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.822 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25261 -I 65520 00:12:42.081 [2024-12-06 11:14:14.801078] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25261: invalid cntlid range [1-65520] 00:12:42.081 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:42.081 { 00:12:42.081 "nqn": "nqn.2016-06.io.spdk:cnode25261", 00:12:42.081 "max_cntlid": 65520, 00:12:42.081 "method": "nvmf_create_subsystem", 00:12:42.081 "req_id": 1 00:12:42.081 } 00:12:42.081 Got JSON-RPC error response 00:12:42.081 response: 00:12:42.081 { 00:12:42.081 "code": -32602, 00:12:42.081 "message": "Invalid cntlid range [1-65520]" 00:12:42.081 }' 00:12:42.081 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:42.081 { 00:12:42.081 "nqn": "nqn.2016-06.io.spdk:cnode25261", 00:12:42.081 "max_cntlid": 65520, 00:12:42.081 "method": "nvmf_create_subsystem", 00:12:42.081 "req_id": 1 00:12:42.081 } 00:12:42.081 Got JSON-RPC error response 00:12:42.081 response: 00:12:42.081 { 00:12:42.081 "code": -32602, 00:12:42.081 "message": "Invalid cntlid range [1-65520]" 00:12:42.081 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.081 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16444 -i 6 -I 5 00:12:42.081 
[2024-12-06 11:14:14.981676] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16444: invalid cntlid range [6-5] 00:12:42.081 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:42.081 { 00:12:42.081 "nqn": "nqn.2016-06.io.spdk:cnode16444", 00:12:42.081 "min_cntlid": 6, 00:12:42.081 "max_cntlid": 5, 00:12:42.081 "method": "nvmf_create_subsystem", 00:12:42.081 "req_id": 1 00:12:42.081 } 00:12:42.081 Got JSON-RPC error response 00:12:42.081 response: 00:12:42.081 { 00:12:42.081 "code": -32602, 00:12:42.081 "message": "Invalid cntlid range [6-5]" 00:12:42.081 }' 00:12:42.081 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:42.081 { 00:12:42.081 "nqn": "nqn.2016-06.io.spdk:cnode16444", 00:12:42.081 "min_cntlid": 6, 00:12:42.081 "max_cntlid": 5, 00:12:42.081 "method": "nvmf_create_subsystem", 00:12:42.081 "req_id": 1 00:12:42.081 } 00:12:42.081 Got JSON-RPC error response 00:12:42.081 response: 00:12:42.081 { 00:12:42.081 "code": -32602, 00:12:42.081 "message": "Invalid cntlid range [6-5]" 00:12:42.081 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.081 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:42.338 { 00:12:42.338 "name": "foobar", 00:12:42.338 "method": "nvmf_delete_target", 00:12:42.338 "req_id": 1 00:12:42.338 } 00:12:42.338 Got JSON-RPC error response 00:12:42.338 response: 00:12:42.338 { 00:12:42.338 "code": -32602, 00:12:42.338 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:42.338 }' 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:42.338 { 00:12:42.338 "name": "foobar", 00:12:42.338 "method": "nvmf_delete_target", 00:12:42.338 "req_id": 1 00:12:42.338 } 00:12:42.338 Got JSON-RPC error response 00:12:42.338 response: 00:12:42.338 { 00:12:42.338 "code": -32602, 00:12:42.338 "message": "The specified target doesn't exist, cannot delete it." 00:12:42.338 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.338 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.339 rmmod nvme_tcp 00:12:42.339 rmmod nvme_fabrics 00:12:42.339 rmmod nvme_keyring 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1655453 ']' 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 1655453 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1655453 ']' 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1655453 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655453 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655453' 00:12:42.339 killing process with pid 1655453 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1655453 00:12:42.339 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1655453 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.598 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.132 00:12:45.132 real 0m12.521s 00:12:45.132 user 0m20.406s 00:12:45.132 sys 0m5.418s 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.132 ************************************ 00:12:45.132 END TEST nvmf_invalid 00:12:45.132 ************************************ 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.132 ************************************ 00:12:45.132 START TEST nvmf_connect_stress 00:12:45.132 ************************************ 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:45.132 * Looking for test storage... 00:12:45.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:45.132 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.133 --rc genhtml_branch_coverage=1 00:12:45.133 --rc genhtml_function_coverage=1 00:12:45.133 --rc genhtml_legend=1 00:12:45.133 --rc geninfo_all_blocks=1 00:12:45.133 --rc geninfo_unexecuted_blocks=1 00:12:45.133 00:12:45.133 ' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.133 --rc genhtml_branch_coverage=1 00:12:45.133 --rc genhtml_function_coverage=1 00:12:45.133 --rc genhtml_legend=1 00:12:45.133 --rc geninfo_all_blocks=1 00:12:45.133 --rc geninfo_unexecuted_blocks=1 00:12:45.133 00:12:45.133 ' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.133 --rc genhtml_branch_coverage=1 00:12:45.133 --rc genhtml_function_coverage=1 00:12:45.133 --rc genhtml_legend=1 00:12:45.133 --rc geninfo_all_blocks=1 00:12:45.133 --rc geninfo_unexecuted_blocks=1 00:12:45.133 00:12:45.133 ' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.133 --rc genhtml_branch_coverage=1 00:12:45.133 --rc genhtml_function_coverage=1 00:12:45.133 --rc genhtml_legend=1 00:12:45.133 --rc geninfo_all_blocks=1 00:12:45.133 --rc geninfo_unexecuted_blocks=1 00:12:45.133 00:12:45.133 ' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.133 11:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.133 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.702 11:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:51.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.702 11:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.702 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:51.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.703 11:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:51.703 Found net devices under 0000:af:00.0: cvl_0_0 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:51.703 Found net devices under 0000:af:00.1: cvl_0_1 
00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:12:51.703 00:12:51.703 --- 10.0.0.2 ping statistics --- 00:12:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.703 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:12:51.703 00:12:51.703 --- 10.0.0.1 ping statistics --- 00:12:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.703 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:51.703 11:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1659893 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1659893 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1659893 ']' 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.703 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.703 [2024-12-06 11:14:23.816663] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:12:51.703 [2024-12-06 11:14:23.816705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.703 [2024-12-06 11:14:23.894183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.703 [2024-12-06 11:14:23.931683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.703 [2024-12-06 11:14:23.931718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.703 [2024-12-06 11:14:23.931724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.703 [2024-12-06 11:14:23.931732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.703 [2024-12-06 11:14:23.931736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.703 [2024-12-06 11:14:23.933085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.703 [2024-12-06 11:14:23.933157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.703 [2024-12-06 11:14:23.933158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.703 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.703 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:51.703 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.704 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.704 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.962 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.963 [2024-12-06 11:14:24.674526] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.963 [2024-12-06 11:14:24.694719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.963 NULL1 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1660172 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:51.963 [the two entries above repeat verbatim, all at 00:12:51.963, for the remaining iterations of $(seq 1 20)]
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1660172
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.963 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:52.221 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.221 [the five-entry poll block above (kill -0 1660172, rpc_cmd, xtrace_disable, set +x, [[ 0 == 0 ]]) repeats verbatim roughly every 250-550 ms from 00:12:52.221 through 00:13:02.139 while the stress clients run]
00:13:02.139 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
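The repeated `kill -0 1660172` / `rpc_cmd` entries above are connect_stress.sh's liveness poll: it keeps issuing RPCs at the target for as long as the stress process is still running. A minimal standalone sketch of that pattern (the `poll_while_alive` helper name and the sleep interval are illustrative, not the script's real code):

```shell
#!/usr/bin/env bash
# Sketch of the liveness-poll loop traced above (connect_stress.sh@34-35).
# poll_while_alive is a made-up name; the real script inlines this in a loop.
poll_while_alive() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the process still exists.
    while kill -0 "$pid" 2>/dev/null; do
        # The real test issues an rpc_cmd here; this sketch just pauses.
        sleep 0.1
    done
}

sleep 0.3 &             # stand-in for the process under test
poll_while_alive $!
echo "target exited"
```

Once `kill -0` fails, the script falls through to cleanup, which is why the log next shows the `No such process` message and the `rm -f .../rpc.txt` teardown.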
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1660172
00:13:02.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1660172) - No such process
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1660172
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:02.397 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1659893 ']'
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1659893
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1659893 ']'
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1659893
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:02.397 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659893
00:13:02.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:02.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:02.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659893'
killing process with pid 1659893
00:13:02.655 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1659893
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1659893
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:02.656 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:05.191
00:13:05.191 real 0m20.014s
00:13:05.191 user 0m42.371s
00:13:05.191 sys 0m8.656s
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:05.191 ************************************
00:13:05.191 END TEST nvmf_connect_stress
00:13:05.191 ************************************
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:05.191 ************************************
00:13:05.191 START TEST nvmf_fused_ordering
00:13:05.191 ************************************
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:05.191 * Looking for test storage...
00:13:05.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.191 --rc genhtml_branch_coverage=1
00:13:05.191 --rc genhtml_function_coverage=1
00:13:05.191 --rc genhtml_legend=1
00:13:05.191 --rc geninfo_all_blocks=1
00:13:05.191 --rc geninfo_unexecuted_blocks=1
00:13:05.191
00:13:05.191 '
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.191 --rc genhtml_branch_coverage=1
00:13:05.191 --rc genhtml_function_coverage=1
00:13:05.191 --rc genhtml_legend=1
00:13:05.191 --rc geninfo_all_blocks=1
00:13:05.191 --rc geninfo_unexecuted_blocks=1
00:13:05.191
00:13:05.191 '
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:13:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.191 --rc genhtml_branch_coverage=1
00:13:05.191 --rc genhtml_function_coverage=1
00:13:05.191 --rc genhtml_legend=1
00:13:05.191 --rc geninfo_all_blocks=1
00:13:05.191 --rc geninfo_unexecuted_blocks=1
00:13:05.191
00:13:05.191 '
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:13:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:05.191 --rc genhtml_branch_coverage=1
00:13:05.191 --rc genhtml_function_coverage=1
00:13:05.191 --rc genhtml_legend=1
00:13:05.191 --rc geninfo_all_blocks=1
00:13:05.191 --rc geninfo_unexecuted_blocks=1
00:13:05.191
00:13:05.191 '
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:05.191 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- #
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:05.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:05.192 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.759 11:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:11.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.759 11:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:11.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.759 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.760 11:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:11.760 Found net devices under 0000:af:00.0: cvl_0_0 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:11.760 Found net devices under 0000:af:00.1: cvl_0_1 
00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:13:11.760 00:13:11.760 --- 10.0.0.2 ping statistics --- 00:13:11.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.760 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:13:11.760 00:13:11.760 --- 10.0.0.1 ping statistics --- 00:13:11.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.760 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:11.760 11:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1665749 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1665749 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1665749 ']' 00:13:11.760 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.761 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.761 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.761 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.761 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.761 [2024-12-06 11:14:43.892851] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:13:11.761 [2024-12-06 11:14:43.892892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.761 [2024-12-06 11:14:43.967767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.761 [2024-12-06 11:14:44.007575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.761 [2024-12-06 11:14:44.007607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.761 [2024-12-06 11:14:44.007614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.761 [2024-12-06 11:14:44.007619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.761 [2024-12-06 11:14:44.007624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:11.761 [2024-12-06 11:14:44.008208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.019 [2024-12-06 11:14:44.748876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.019 [2024-12-06 11:14:44.769071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.019 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.020 NULL1 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.020 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:12.020 [2024-12-06 11:14:44.829202] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:13:12.020 [2024-12-06 11:14:44.829246] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665874 ] 00:13:12.278 Attached to nqn.2016-06.io.spdk:cnode1 00:13:12.278 Namespace ID: 1 size: 1GB 00:13:12.278 fused_ordering(0) 00:13:12.278 fused_ordering(1) 00:13:12.278 fused_ordering(2) 00:13:12.278 fused_ordering(3) 00:13:12.278 fused_ordering(4) 00:13:12.278 fused_ordering(5) 00:13:12.278 fused_ordering(6) 00:13:12.278 fused_ordering(7) 00:13:12.278 fused_ordering(8) 00:13:12.278 fused_ordering(9) 00:13:12.278 fused_ordering(10) 00:13:12.278 fused_ordering(11) 00:13:12.278 fused_ordering(12) 00:13:12.279 fused_ordering(13) 00:13:12.279 fused_ordering(14) 00:13:12.279 fused_ordering(15) 00:13:12.279 fused_ordering(16) 00:13:12.279 fused_ordering(17) 00:13:12.279 fused_ordering(18) 00:13:12.279 fused_ordering(19) 00:13:12.279 fused_ordering(20) 00:13:12.279 fused_ordering(21) 00:13:12.279 fused_ordering(22) 00:13:12.279 fused_ordering(23) 00:13:12.279 fused_ordering(24) 00:13:12.279 fused_ordering(25) 00:13:12.279 fused_ordering(26) 00:13:12.279 fused_ordering(27) 00:13:12.279 
fused_ordering(28) 00:13:12.279 fused_ordering(29) 00:13:12.279 [... fused_ordering iterations 30-996 elided: repetitive per-iteration counter output, timestamps advancing 00:13:12.279 -> 00:13:13.680 ...] fused_ordering(997)
00:13:13.680 fused_ordering(998) 00:13:13.680 fused_ordering(999) 00:13:13.680 fused_ordering(1000) 00:13:13.680 fused_ordering(1001) 00:13:13.680 fused_ordering(1002) 00:13:13.680 fused_ordering(1003) 00:13:13.680 fused_ordering(1004) 00:13:13.680 fused_ordering(1005) 00:13:13.680 fused_ordering(1006) 00:13:13.680 fused_ordering(1007) 00:13:13.680 fused_ordering(1008) 00:13:13.680 fused_ordering(1009) 00:13:13.680 fused_ordering(1010) 00:13:13.680 fused_ordering(1011) 00:13:13.680 fused_ordering(1012) 00:13:13.680 fused_ordering(1013) 00:13:13.680 fused_ordering(1014) 00:13:13.680 fused_ordering(1015) 00:13:13.680 fused_ordering(1016) 00:13:13.680 fused_ordering(1017) 00:13:13.680 fused_ordering(1018) 00:13:13.680 fused_ordering(1019) 00:13:13.680 fused_ordering(1020) 00:13:13.680 fused_ordering(1021) 00:13:13.680 fused_ordering(1022) 00:13:13.680 fused_ordering(1023) 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.681 rmmod nvme_tcp 00:13:13.681 rmmod nvme_fabrics 00:13:13.681 rmmod nvme_keyring 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1665749 ']' 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1665749 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1665749 ']' 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1665749 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.681 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665749 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665749' 00:13:13.939 killing process with pid 1665749 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1665749 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1665749 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.939 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.940 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.940 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.940 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.940 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:16.472 00:13:16.472 real 0m11.232s 00:13:16.472 user 0m5.564s 00:13:16.472 sys 0m5.875s 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.472 ************************************ 00:13:16.472 END TEST nvmf_fused_ordering 00:13:16.472 ************************************ 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:16.472 11:14:48 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.472 ************************************ 00:13:16.472 START TEST nvmf_ns_masking 00:13:16.472 ************************************ 00:13:16.472 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:16.472 * Looking for test storage... 00:13:16.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.472 11:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.472 --rc genhtml_branch_coverage=1 00:13:16.472 --rc genhtml_function_coverage=1 00:13:16.472 --rc genhtml_legend=1 00:13:16.472 --rc geninfo_all_blocks=1 00:13:16.472 --rc geninfo_unexecuted_blocks=1 00:13:16.472 00:13:16.472 ' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.472 --rc genhtml_branch_coverage=1 00:13:16.472 --rc genhtml_function_coverage=1 00:13:16.472 --rc genhtml_legend=1 00:13:16.472 --rc geninfo_all_blocks=1 00:13:16.472 --rc geninfo_unexecuted_blocks=1 00:13:16.472 00:13:16.472 ' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.472 --rc genhtml_branch_coverage=1 00:13:16.472 --rc genhtml_function_coverage=1 00:13:16.472 --rc genhtml_legend=1 00:13:16.472 --rc geninfo_all_blocks=1 00:13:16.472 --rc geninfo_unexecuted_blocks=1 00:13:16.472 00:13:16.472 ' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.472 --rc genhtml_branch_coverage=1 00:13:16.472 --rc 
genhtml_function_coverage=1 00:13:16.472 --rc genhtml_legend=1 00:13:16.472 --rc geninfo_all_blocks=1 00:13:16.472 --rc geninfo_unexecuted_blocks=1 00:13:16.472 00:13:16.472 ' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.472 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d1d63c30-193f-45d6-b402-6777548d2049 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0cd7739c-ff9f-4239-9fe3-4d5723c75438 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a38e644e-49cf-43a2-9182-f76b4d9bbf24 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.473 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.065 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.065 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.065 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:23.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:23.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:13:23.066 Found net devices under 0000:af:00.0: cvl_0_0 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:23.066 Found net devices under 0000:af:00.1: cvl_0_1 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.066 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:13:23.066 00:13:23.066 --- 10.0.0.2 ping statistics --- 00:13:23.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.066 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
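The `nvmf_tcp_init` steps traced above move one port of the NIC pair into a private network namespace so the target and initiator can exchange real TCP traffic on a single host, then verify reachability in both directions with ping. A condensed sketch of that pattern follows; the device names (`cvl_0_0`, `cvl_0_1`), namespace name, and addresses are taken from the trace itself. This is a privileged setup fragment (requires root and the actual NICs), not a tested script:

```shell
# Privileged network-namespace setup, mirroring the traced commands.
ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0             # target IP (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment tags the rule for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
```

The `SPDK_NVMF` comment tag is what makes teardown possible by pattern: the `iptr` step seen earlier in the trace restores the firewall with `iptables-save | grep -v SPDK_NVMF | iptables-restore`, dropping exactly the rules this setup added.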
00:13:23.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:13:23.066 00:13:23.066 --- 10.0.0.1 ping statistics --- 00:13:23.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.066 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1669851 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1669851 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1669851 ']' 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.066 [2024-12-06 11:14:55.258836] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:13:23.066 [2024-12-06 11:14:55.258887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.066 [2024-12-06 11:14:55.332842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.066 [2024-12-06 11:14:55.370639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.066 [2024-12-06 11:14:55.370673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
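The `waitforlisten 1669851` step above polls until the freshly started `nvmf_tgt` is up and listening on the UNIX domain socket `/var/tmp/spdk.sock`, with a bounded retry budget (`max_retries=100` in the trace). A minimal sketch of that wait-with-retries pattern, assuming a simple path-existence poll; the helper name `wait_for_path` is ours for illustration, not SPDK's:

```shell
#!/usr/bin/env bash
# Poll until a filesystem path (e.g. a UNIX domain socket) appears, or
# give up after max_retries attempts. Illustrates the retry pattern from
# the trace; it does not replicate SPDK's actual waitforlisten helper.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 0.1   # back off briefly between checks
    done
    return 0
}
```

With a 0.1 s sleep and 100 retries this gives the target roughly ten seconds to create its RPC socket before the test aborts.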
00:13:23.066 [2024-12-06 11:14:55.370683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.066 [2024-12-06 11:14:55.370689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.066 [2024-12-06 11:14:55.370693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.067 [2024-12-06 11:14:55.371227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.326 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.326 [2024-12-06 11:14:56.256173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.585 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:23.585 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:23.585 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:23.585 Malloc1 00:13:23.585 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:23.844 Malloc2 00:13:23.844 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.102 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:24.360 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.360 [2024-12-06 11:14:57.236433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.360 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:24.360 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a38e644e-49cf-43a2-9182-f76b4d9bbf24 -a 10.0.0.2 -s 4420 -i 4 00:13:24.618 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.618 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.618 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.618 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:24.618 11:14:57 
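The setup phase logged above drives SPDK over JSON-RPC via `rpc.py`: create a TCP transport, two 64 MiB/512 B malloc bdevs, a subsystem, a namespace, and a listener. As a rough companion to the log, the hypothetical helper below only assembles request payloads in the JSON-RPC 2.0 shape that `rpc.py` sends over `/var/tmp/spdk.sock`; the method names come from the log, while the exact parameter layouts are approximations, not authoritative SPDK API documentation.

```python
import json

def rpc_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request like the ones rpc.py sends to spdk.sock."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

# The setup sequence visible in the log (parameter shapes are approximate):
setup = [
    rpc_request("nvmf_create_transport", {"trtype": "TCP"}),
    rpc_request("bdev_malloc_create", {"num_blocks": 64 * 1024 * 1024 // 512,
                                       "block_size": 512, "name": "Malloc1"}),
    rpc_request("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                          "allow_any_host": True,
                                          "serial_number": "SPDKISFASTANDAWESOME"}),
    rpc_request("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                          "namespace": {"bdev_name": "Malloc1",
                                                        "nsid": 1}}),
    rpc_request("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                                "listen_address": {"trtype": "TCP",
                                                                   "traddr": "10.0.0.2",
                                                                   "trsvcid": "4420"}}),
]
print(json.dumps(setup[0]))
```

Each entry mirrors one `rpc.py` invocation in the log; a real client would write each request, newline-terminated, to the UNIX socket and read back a matching `id` in the response.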
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:26.522 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.781 [ 0]:0x1 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.781 
11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0467829992eb4530a2d8a587641104be 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0467829992eb4530a2d8a587641104be != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.781 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:27.041 [ 0]:0x1 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0467829992eb4530a2d8a587641104be 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0467829992eb4530a2d8a587641104be != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:27.041 [ 1]:0x2 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.041 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.299 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a38e644e-49cf-43a2-9182-f76b4d9bbf24 -a 10.0.0.2 -s 4420 -i 4 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.558 11:15:00 
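The repeated `[[ $nguid != \0\0…\0 ]]` comparisons above are the test's visibility probe: a namespace masked away from the host still enumerates, but `nvme id-ns` reports an all-zero NGUID for it. A minimal re-implementation of that predicate (the function name mirrors the script's `ns_is_visible` helper; the sample NGUIDs are the ones observed in the log):

```python
def ns_is_visible(nguid: str) -> bool:
    """A namespace hidden by masking identifies with an all-zero NGUID."""
    return nguid != "0" * 32

# NGUIDs observed in the log above:
assert ns_is_visible("0467829992eb4530a2d8a587641104be")      # Malloc1, visible
assert ns_is_visible("2efe3f7f959d4011bceb8b10095d006e")      # Malloc2, visible
assert not ns_is_visible("00000000000000000000000000000000")  # masked namespace
```

This is why the `NOT ns_is_visible 0x1` steps later in the log succeed: after `--no-auto-visible` (or `nvmf_ns_remove_host`), namespace 1 reads back as all zeros and the predicate fails as expected.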
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:27.558 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
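The `waitforserial` helper traced above (`(( i++ <= 15 ))`, `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME`, `(( nvme_devices == nvme_device_counter ))`) polls until the expected number of NVMe devices with the serial appears. A hedged Python rendering of that loop, where the `count_devices` callable stands in for the `lsblk | grep -c` pipeline and the 2-second sleep is parameterized for testability:

```python
import time

def waitforserial(count_devices, expected=1, retries=15, delay=0.0):
    """Poll until count_devices() matches `expected`, like the script's helper."""
    for _ in range(retries + 1):
        if count_devices() == expected:
            return True   # analogous to the shell helper's `return 0`
        time.sleep(delay)
    return False          # gave up, like falling out of the retry loop

# Simulate the second device appearing on the third poll:
polls = iter([0, 1, 2])
assert waitforserial(lambda: next(polls), expected=2)
```

In the log the second `connect 2` call passes `expected=2` because, after `nvmf_ns_add_host`, both namespaces surface as block devices with the same controller serial.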
00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.094 [ 0]:0x2 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.094 [ 0]:0x1 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0467829992eb4530a2d8a587641104be 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0467829992eb4530a2d8a587641104be != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.094 [ 1]:0x2 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.094 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:30.353 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:30.353 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.354 [ 0]:0x2 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:30.354 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.613 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:30.613 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:30.613 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a38e644e-49cf-43a2-9182-f76b4d9bbf24 -a 10.0.0.2 -s 4420 -i 4 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:30.871 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.941 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.199 [ 0]:0x1 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.199 11:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0467829992eb4530a2d8a587641104be 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0467829992eb4530a2d8a587641104be != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.199 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.199 [ 1]:0x2 00:13:33.199 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.199 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.199 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:33.199 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.199 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.457 
11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.457 [ 0]:0x2 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.457 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:33.457 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.714 [2024-12-06 11:15:06.514505] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:33.714 request: 00:13:33.714 { 00:13:33.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.714 "nsid": 2, 00:13:33.714 "host": "nqn.2016-06.io.spdk:host1", 00:13:33.714 "method": "nvmf_ns_remove_host", 00:13:33.714 "req_id": 1 00:13:33.714 } 00:13:33.714 Got JSON-RPC error response 00:13:33.714 response: 00:13:33.714 { 00:13:33.714 "code": -32602, 00:13:33.714 "message": "Invalid parameters" 00:13:33.714 } 00:13:33.714 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:33.714 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.714 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.714 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
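The failed `nvmf_ns_remove_host` call above is the negative-path check: removing a host that is not in namespace 2's allowed list yields a JSON-RPC error object, which the `NOT` wrapper turns into the expected non-zero exit status (`es=1`). Parsing the error body exactly as it appears in the log:

```python
import json

# Error response reproduced from the log above:
response = json.loads("""
{
  "code": -32602,
  "message": "Invalid parameters"
}
""")

# -32602 is the standard JSON-RPC 2.0 "invalid params" code; here it signals
# that nqn.2016-06.io.spdk:host1 was not attached to namespace ID 2.
assert response["code"] == -32602
assert response["message"] == "Invalid parameters"
```

Note the target also logs the underlying cause (`nvmf_rpc_ns_visible_paused: Unable to add/remove … to namespace ID 2`) before returning the generic error to the client.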
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.714 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:33.715 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.715 [ 0]:0x2 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.715 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2efe3f7f959d4011bceb8b10095d006e 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2efe3f7f959d4011bceb8b10095d006e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1672171 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1672171 /var/tmp/host.sock 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1672171 ']' 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:33.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.973 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.973 [2024-12-06 11:15:06.744235] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:13:33.973 [2024-12-06 11:15:06.744278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672171 ] 00:13:33.973 [2024-12-06 11:15:06.818978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.973 [2024-12-06 11:15:06.856982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.908 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.908 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:34.908 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.908 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.166 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d1d63c30-193f-45d6-b402-6777548d2049 00:13:35.166 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:35.166 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D1D63C30193F45D6B4026777548D2049 -i 00:13:35.424 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0cd7739c-ff9f-4239-9fe3-4d5723c75438 00:13:35.424 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:35.425 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0CD7739CFF9F42399FE34D5723C75438 -i 00:13:35.425 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.682 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:35.940 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:35.940 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:36.197 nvme0n1 00:13:36.197 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:36.197 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:36.454 nvme1n2 00:13:36.454 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:36.454 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:36.455 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:36.455 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:36.455 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:36.713 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:36.713 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:36.713 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:36.713 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:36.972 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d1d63c30-193f-45d6-b402-6777548d2049 == \d\1\d\6\3\c\3\0\-\1\9\3\f\-\4\5\d\6\-\b\4\0\2\-\6\7\7\7\5\4\8\d\2\0\4\9 ]] 00:13:36.972 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:36.972 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:36.972 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:37.230 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0cd7739c-ff9f-4239-9fe3-4d5723c75438 == \0\c\d\7\7\3\9\c\-\f\f\9\f\-\4\2\3\9\-\9\f\e\3\-\4\d\5\7\2\3\c\7\5\4\3\8 ]] 00:13:37.230 11:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.230 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d1d63c30-193f-45d6-b402-6777548d2049 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1D63C30193F45D6B4026777548D2049 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1D63C30193F45D6B4026777548D2049 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:37.488 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1D63C30193F45D6B4026777548D2049 00:13:37.745 [2024-12-06 11:15:10.485603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:37.745 [2024-12-06 11:15:10.485633] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:37.745 [2024-12-06 11:15:10.485641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.745 request: 00:13:37.745 { 00:13:37.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.745 "namespace": { 00:13:37.745 "bdev_name": "invalid", 00:13:37.745 "nsid": 1, 00:13:37.745 "nguid": "D1D63C30193F45D6B4026777548D2049", 00:13:37.745 "no_auto_visible": false, 00:13:37.745 "hide_metadata": false 00:13:37.745 }, 00:13:37.745 "method": "nvmf_subsystem_add_ns", 00:13:37.745 "req_id": 1 00:13:37.745 } 00:13:37.745 Got JSON-RPC error response 00:13:37.745 response: 00:13:37.745 { 00:13:37.745 "code": -32602, 00:13:37.745 "message": "Invalid parameters" 00:13:37.745 } 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.745 11:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d1d63c30-193f-45d6-b402-6777548d2049 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:37.745 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D1D63C30193F45D6B4026777548D2049 -i 00:13:38.002 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:39.898 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:39.898 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:39.898 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1672171 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1672171 ']' 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1672171 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:40.157 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1672171 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1672171' 00:13:40.157 killing process with pid 1672171 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1672171 00:13:40.157 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1672171 00:13:40.415 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:40.674 rmmod nvme_tcp 00:13:40.674 rmmod nvme_fabrics 00:13:40.674 rmmod nvme_keyring 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1669851 ']' 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1669851 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1669851 ']' 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1669851 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1669851 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1669851' 00:13:40.674 killing process with pid 1669851 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1669851 00:13:40.674 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1669851 00:13:40.932 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.932 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.932 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.932 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:40.932 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.933 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:43.466 00:13:43.466 real 0m26.868s 00:13:43.466 user 0m32.367s 00:13:43.466 sys 0m7.021s 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.466 ************************************ 00:13:43.466 END TEST nvmf_ns_masking 00:13:43.466 ************************************ 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.466 ************************************ 00:13:43.466 START TEST nvmf_nvme_cli 00:13:43.466 ************************************ 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:43.466 * Looking for test storage... 00:13:43.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:43.466 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.466 --rc genhtml_branch_coverage=1 00:13:43.466 --rc genhtml_function_coverage=1 00:13:43.466 --rc genhtml_legend=1 00:13:43.466 --rc geninfo_all_blocks=1 00:13:43.466 --rc geninfo_unexecuted_blocks=1 00:13:43.466 
00:13:43.466 ' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.466 --rc genhtml_branch_coverage=1 00:13:43.466 --rc genhtml_function_coverage=1 00:13:43.466 --rc genhtml_legend=1 00:13:43.466 --rc geninfo_all_blocks=1 00:13:43.466 --rc geninfo_unexecuted_blocks=1 00:13:43.466 00:13:43.466 ' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.466 --rc genhtml_branch_coverage=1 00:13:43.466 --rc genhtml_function_coverage=1 00:13:43.466 --rc genhtml_legend=1 00:13:43.466 --rc geninfo_all_blocks=1 00:13:43.466 --rc geninfo_unexecuted_blocks=1 00:13:43.466 00:13:43.466 ' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.466 --rc genhtml_branch_coverage=1 00:13:43.466 --rc genhtml_function_coverage=1 00:13:43.466 --rc genhtml_legend=1 00:13:43.466 --rc geninfo_all_blocks=1 00:13:43.466 --rc geninfo_unexecuted_blocks=1 00:13:43.466 00:13:43.466 ' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.466 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.467 11:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.467 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:50.033 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.033 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:50.034 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:50.034 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.034 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:50.034 Found net devices under 0000:af:00.0: cvl_0_0 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:50.034 Found net devices under 0000:af:00.1: cvl_0_1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.034 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:50.034 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:13:50.034 00:13:50.034 --- 10.0.0.2 ping statistics --- 00:13:50.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.034 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:50.034 00:13:50.034 --- 10.0.0.1 ping statistics --- 00:13:50.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.034 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.034 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1677577 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1677577 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1677577 ']' 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.034 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.034 [2024-12-06 11:15:22.146804] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:13:50.034 [2024-12-06 11:15:22.146843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.034 [2024-12-06 11:15:22.222831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.034 [2024-12-06 11:15:22.261782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.035 [2024-12-06 11:15:22.261819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.035 [2024-12-06 11:15:22.261825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.035 [2024-12-06 11:15:22.261832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.035 [2024-12-06 11:15:22.261836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:50.035 [2024-12-06 11:15:22.263389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.035 [2024-12-06 11:15:22.263504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.035 [2024-12-06 11:15:22.263594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.035 [2024-12-06 11:15:22.263595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.035 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.035 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:50.035 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:50.035 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:50.035 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.293 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.293 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.293 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 [2024-12-06 11:15:23.001961] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 Malloc0 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 Malloc1 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.293 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.294 [2024-12-06 11:15:23.089305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.294 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:50.552 00:13:50.552 Discovery Log Number of Records 2, Generation counter 2 00:13:50.552 =====Discovery Log Entry 0====== 00:13:50.552 trtype: tcp 00:13:50.552 adrfam: ipv4 00:13:50.552 subtype: current discovery subsystem 00:13:50.552 treq: not required 00:13:50.552 portid: 0 00:13:50.552 trsvcid: 4420 
00:13:50.552 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:50.552 traddr: 10.0.0.2 00:13:50.552 eflags: explicit discovery connections, duplicate discovery information 00:13:50.552 sectype: none 00:13:50.552 =====Discovery Log Entry 1====== 00:13:50.552 trtype: tcp 00:13:50.552 adrfam: ipv4 00:13:50.552 subtype: nvme subsystem 00:13:50.552 treq: not required 00:13:50.552 portid: 0 00:13:50.552 trsvcid: 4420 00:13:50.552 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:50.552 traddr: 10.0.0.2 00:13:50.552 eflags: none 00:13:50.552 sectype: none 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:50.552 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.928 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:51.928 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:51.928 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.928 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:51.928 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:51.928 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:53.828 
11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:53.828 /dev/nvme0n2 ]] 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.828 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:54.088 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.348 rmmod nvme_tcp 00:13:54.348 rmmod nvme_fabrics 00:13:54.348 rmmod nvme_keyring 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1677577 ']' 
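
The teardown the `nvmf_nvme_cli` test performs above — `nvme disconnect` on the subsystem NQN, then polling `lsblk` until no block device reports the test serial — can be sketched as follows. The NQN and serial are taken from the log; the `nvme disconnect` call is dry-run with `echo` here so the sketch is safe to execute on a machine without an active NVMe-oF connection (drop the `echo` on a real target).

```shell
#!/bin/sh
# Sketch of the nvmf_nvme_cli disconnect/cleanup path seen in the log.
# NQN and serial come from the log; "nvme" is the nvme-cli binary.
NQN="nqn.2016-06.io.spdk:cnode1"
SERIAL="SPDKISFASTANDAWESOME"

echo nvme disconnect -n "$NQN"   # dry run; remove "echo" against a live target

# waitforserial_disconnect: poll until no block device reports the serial,
# mirroring the log's `lsblk -l -o NAME,SERIAL | grep -q -w $SERIAL` loop.
i=0
while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$SERIAL"; do
    i=$((i + 1))
    if [ "$i" -ge 15 ]; then
        echo "device with serial $SERIAL still present" >&2
        exit 1
    fi
    sleep 1
done
echo "no namespace with serial $SERIAL remains"
```

On a host that was never connected, the loop exits immediately on the first probe; the retry cap mirrors the bounded wait the test helper uses.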
00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1677577 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1677577 ']' 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1677577 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.348 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677577 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677577' 00:13:54.608 killing process with pid 1677577 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1677577 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1677577 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.608 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.140 00:13:57.140 real 0m13.686s 00:13:57.140 user 0m22.773s 00:13:57.140 sys 0m5.135s 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.140 ************************************ 00:13:57.140 END TEST nvmf_nvme_cli 00:13:57.140 ************************************ 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.140 ************************************ 00:13:57.140 
START TEST nvmf_vfio_user 00:13:57.140 ************************************ 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:57.140 * Looking for test storage... 00:13:57.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.140 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:57.140 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.140 --rc genhtml_branch_coverage=1 00:13:57.140 --rc genhtml_function_coverage=1 00:13:57.140 --rc genhtml_legend=1 00:13:57.140 --rc geninfo_all_blocks=1 00:13:57.140 --rc geninfo_unexecuted_blocks=1 00:13:57.140 00:13:57.140 ' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.140 --rc genhtml_branch_coverage=1 00:13:57.140 --rc genhtml_function_coverage=1 00:13:57.140 --rc genhtml_legend=1 00:13:57.140 --rc geninfo_all_blocks=1 00:13:57.140 --rc geninfo_unexecuted_blocks=1 00:13:57.140 00:13:57.140 ' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.140 --rc genhtml_branch_coverage=1 00:13:57.140 --rc genhtml_function_coverage=1 00:13:57.140 --rc genhtml_legend=1 00:13:57.140 --rc geninfo_all_blocks=1 00:13:57.140 --rc geninfo_unexecuted_blocks=1 00:13:57.140 00:13:57.140 ' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.140 --rc genhtml_branch_coverage=1 00:13:57.140 --rc genhtml_function_coverage=1 00:13:57.140 --rc genhtml_legend=1 00:13:57.140 --rc geninfo_all_blocks=1 00:13:57.140 --rc geninfo_unexecuted_blocks=1 00:13:57.140 00:13:57.140 ' 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.140 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.141 
11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:57.141 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1679098 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1679098' 00:13:57.141 Process pid: 1679098 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1679098 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1679098 ']' 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.141 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:57.141 [2024-12-06 11:15:29.919513] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:13:57.141 [2024-12-06 11:15:29.919556] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.141 [2024-12-06 11:15:29.974851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.141 [2024-12-06 11:15:30.018331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.141 [2024-12-06 11:15:30.018364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.141 [2024-12-06 11:15:30.018375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.141 [2024-12-06 11:15:30.018381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.141 [2024-12-06 11:15:30.018386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.141 [2024-12-06 11:15:30.022077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.141 [2024-12-06 11:15:30.022112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.141 [2024-12-06 11:15:30.022217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.141 [2024-12-06 11:15:30.022217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.399 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.399 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:57.399 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:58.333 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:58.591 Malloc1 00:13:58.591 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:58.849 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:59.107 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:59.366 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.366 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:59.366 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:59.366 Malloc2 00:13:59.366 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:59.625 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:59.884 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:00.145 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:00.145 [2024-12-06 11:15:32.866346] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:14:00.145 [2024-12-06 11:15:32.866390] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679646 ] 00:14:00.145 [2024-12-06 11:15:32.903339] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:00.145 [2024-12-06 11:15:32.908615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:00.145 [2024-12-06 11:15:32.908636] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe0b7dc5000 00:14:00.145 [2024-12-06 11:15:32.909616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:00.145 [2024-12-06 11:15:32.910615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:00.145 [2024-12-06 11:15:32.911616] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:00.145 [2024-12-06 11:15:32.912623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:00.146 [2024-12-06 11:15:32.913631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:00.146 [2024-12-06 11:15:32.914633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:00.146 [2024-12-06 11:15:32.915631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:00.146 [2024-12-06 11:15:32.916633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:00.146 [2024-12-06 11:15:32.917640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:00.146 [2024-12-06 11:15:32.917648] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe0b7dba000 00:14:00.146 [2024-12-06 11:15:32.918490] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:00.146 [2024-12-06 11:15:32.927494] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:00.146 [2024-12-06 11:15:32.927517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:00.146 [2024-12-06 11:15:32.932724] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:00.146 [2024-12-06 11:15:32.932758] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:00.146 [2024-12-06 11:15:32.932824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:00.146 [2024-12-06 11:15:32.932838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:00.146 [2024-12-06 11:15:32.932845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:00.146 [2024-12-06 11:15:32.933722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:00.146 [2024-12-06 11:15:32.933731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:00.146 [2024-12-06 11:15:32.933736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:00.146 [2024-12-06 11:15:32.934728] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:00.146 [2024-12-06 11:15:32.934735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:00.146 [2024-12-06 11:15:32.934741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.935728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:00.146 [2024-12-06 11:15:32.935735] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.936738] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:00.146 [2024-12-06 11:15:32.936745] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:00.146 [2024-12-06 11:15:32.936749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.936754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.936861] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:00.146 [2024-12-06 11:15:32.936865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.936870] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:00.146 [2024-12-06 11:15:32.937746] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:00.146 [2024-12-06 11:15:32.938746] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:00.146 [2024-12-06 11:15:32.939755] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:00.146 [2024-12-06 11:15:32.940750] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.146 [2024-12-06 11:15:32.940808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:00.146 [2024-12-06 11:15:32.941763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:00.146 [2024-12-06 11:15:32.941770] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:00.146 [2024-12-06 11:15:32.941774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:00.146 [2024-12-06 11:15:32.941798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:00.146 [2024-12-06 11:15:32.941818] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:00.146 [2024-12-06 11:15:32.941821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.146 [2024-12-06 11:15:32.941833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:00.146 [2024-12-06 11:15:32.941868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:00.146 [2024-12-06 11:15:32.941876] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:00.146 [2024-12-06 11:15:32.941882] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:00.146 [2024-12-06 11:15:32.941885] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:00.146 [2024-12-06 11:15:32.941889] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:00.146 [2024-12-06 11:15:32.941893] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:00.146 [2024-12-06 11:15:32.941897] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:00.146 [2024-12-06 11:15:32.941901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:00.146 [2024-12-06 11:15:32.941928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:00.146 [2024-12-06 11:15:32.941937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.146 [2024-12-06 
11:15:32.941945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.146 [2024-12-06 11:15:32.941951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.146 [2024-12-06 11:15:32.941958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.146 [2024-12-06 11:15:32.941962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.941976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:00.146 [2024-12-06 11:15:32.941987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:00.146 [2024-12-06 11:15:32.941992] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:00.146 [2024-12-06 11:15:32.941998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:00.146 [2024-12-06 11:15:32.942025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:00.146 [2024-12-06 11:15:32.942075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942088] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:00.146 [2024-12-06 11:15:32.942092] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:00.146 [2024-12-06 11:15:32.942095] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.146 [2024-12-06 11:15:32.942100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:00.146 [2024-12-06 11:15:32.942111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:00.146 [2024-12-06 11:15:32.942120] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:00.146 [2024-12-06 11:15:32.942130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:00.146 [2024-12-06 11:15:32.942142] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:00.147 [2024-12-06 11:15:32.942145] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:00.147 [2024-12-06 11:15:32.942148] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.147 [2024-12-06 11:15:32.942153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:00.147 [2024-12-06 11:15:32.942195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:00.147 [2024-12-06 11:15:32.942198] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.147 [2024-12-06 11:15:32.942203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942249] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:00.147 [2024-12-06 11:15:32.942253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:00.147 [2024-12-06 11:15:32.942257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:00.147 [2024-12-06 11:15:32.942272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942351] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:00.147 [2024-12-06 11:15:32.942354] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:00.147 [2024-12-06 11:15:32.942357] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:00.147 [2024-12-06 11:15:32.942360] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:00.147 [2024-12-06 11:15:32.942363] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:00.147 [2024-12-06 11:15:32.942368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:00.147 [2024-12-06 11:15:32.942374] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:00.147 [2024-12-06 11:15:32.942377] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:00.147 [2024-12-06 11:15:32.942380] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.147 [2024-12-06 11:15:32.942386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:00.147 [2024-12-06 11:15:32.942396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:00.147 [2024-12-06 11:15:32.942398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.147 [2024-12-06 11:15:32.942403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942409] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:00.147 [2024-12-06 11:15:32.942412] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:00.147 [2024-12-06 11:15:32.942415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:00.147 [2024-12-06 11:15:32.942420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:00.147 [2024-12-06 11:15:32.942425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:00.147 [2024-12-06 11:15:32.942448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:00.147 ===================================================== 00:14:00.147 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.147 ===================================================== 00:14:00.147 Controller Capabilities/Features 00:14:00.147 ================================ 00:14:00.147 Vendor ID: 4e58 00:14:00.147 Subsystem Vendor ID: 4e58 00:14:00.147 Serial Number: SPDK1 00:14:00.147 Model Number: SPDK bdev Controller 00:14:00.147 Firmware Version: 25.01 00:14:00.147 Recommended Arb Burst: 6 00:14:00.147 IEEE OUI Identifier: 8d 6b 50 00:14:00.147 Multi-path I/O 00:14:00.147 May have multiple subsystem ports: Yes 00:14:00.147 May have multiple controllers: Yes 00:14:00.147 Associated with SR-IOV VF: No 00:14:00.147 Max Data Transfer Size: 131072 00:14:00.147 Max Number of Namespaces: 32 00:14:00.147 Max Number of I/O Queues: 127 00:14:00.147 NVMe Specification Version (VS): 1.3 00:14:00.147 NVMe Specification Version (Identify): 1.3 00:14:00.147 Maximum Queue Entries: 256 00:14:00.147 Contiguous Queues Required: Yes 00:14:00.147 Arbitration Mechanisms Supported 00:14:00.147 Weighted Round Robin: Not Supported 00:14:00.147 Vendor Specific: Not Supported 00:14:00.147 Reset Timeout: 15000 ms 00:14:00.147 Doorbell Stride: 4 bytes 00:14:00.147 NVM Subsystem Reset: Not Supported 00:14:00.147 Command Sets Supported 00:14:00.147 NVM Command Set: Supported 00:14:00.147 Boot Partition: Not Supported 00:14:00.147 Memory 
Page Size Minimum: 4096 bytes 00:14:00.147 Memory Page Size Maximum: 4096 bytes 00:14:00.147 Persistent Memory Region: Not Supported 00:14:00.147 Optional Asynchronous Events Supported 00:14:00.147 Namespace Attribute Notices: Supported 00:14:00.147 Firmware Activation Notices: Not Supported 00:14:00.147 ANA Change Notices: Not Supported 00:14:00.147 PLE Aggregate Log Change Notices: Not Supported 00:14:00.147 LBA Status Info Alert Notices: Not Supported 00:14:00.147 EGE Aggregate Log Change Notices: Not Supported 00:14:00.147 Normal NVM Subsystem Shutdown event: Not Supported 00:14:00.147 Zone Descriptor Change Notices: Not Supported 00:14:00.147 Discovery Log Change Notices: Not Supported 00:14:00.147 Controller Attributes 00:14:00.147 128-bit Host Identifier: Supported 00:14:00.147 Non-Operational Permissive Mode: Not Supported 00:14:00.147 NVM Sets: Not Supported 00:14:00.147 Read Recovery Levels: Not Supported 00:14:00.147 Endurance Groups: Not Supported 00:14:00.147 Predictable Latency Mode: Not Supported 00:14:00.147 Traffic Based Keep ALive: Not Supported 00:14:00.147 Namespace Granularity: Not Supported 00:14:00.147 SQ Associations: Not Supported 00:14:00.147 UUID List: Not Supported 00:14:00.147 Multi-Domain Subsystem: Not Supported 00:14:00.147 Fixed Capacity Management: Not Supported 00:14:00.147 Variable Capacity Management: Not Supported 00:14:00.147 Delete Endurance Group: Not Supported 00:14:00.147 Delete NVM Set: Not Supported 00:14:00.147 Extended LBA Formats Supported: Not Supported 00:14:00.147 Flexible Data Placement Supported: Not Supported 00:14:00.147 00:14:00.147 Controller Memory Buffer Support 00:14:00.147 ================================ 00:14:00.147 Supported: No 00:14:00.147 00:14:00.147 Persistent Memory Region Support 00:14:00.147 ================================ 00:14:00.147 Supported: No 00:14:00.147 00:14:00.147 Admin Command Set Attributes 00:14:00.147 ============================ 00:14:00.147 Security Send/Receive: Not Supported 
00:14:00.147 Format NVM: Not Supported 00:14:00.148 Firmware Activate/Download: Not Supported 00:14:00.148 Namespace Management: Not Supported 00:14:00.148 Device Self-Test: Not Supported 00:14:00.148 Directives: Not Supported 00:14:00.148 NVMe-MI: Not Supported 00:14:00.148 Virtualization Management: Not Supported 00:14:00.148 Doorbell Buffer Config: Not Supported 00:14:00.148 Get LBA Status Capability: Not Supported 00:14:00.148 Command & Feature Lockdown Capability: Not Supported 00:14:00.148 Abort Command Limit: 4 00:14:00.148 Async Event Request Limit: 4 00:14:00.148 Number of Firmware Slots: N/A 00:14:00.148 Firmware Slot 1 Read-Only: N/A 00:14:00.148 Firmware Activation Without Reset: N/A 00:14:00.148 Multiple Update Detection Support: N/A 00:14:00.148 Firmware Update Granularity: No Information Provided 00:14:00.148 Per-Namespace SMART Log: No 00:14:00.148 Asymmetric Namespace Access Log Page: Not Supported 00:14:00.148 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:00.148 Command Effects Log Page: Supported 00:14:00.148 Get Log Page Extended Data: Supported 00:14:00.148 Telemetry Log Pages: Not Supported 00:14:00.148 Persistent Event Log Pages: Not Supported 00:14:00.148 Supported Log Pages Log Page: May Support 00:14:00.148 Commands Supported & Effects Log Page: Not Supported 00:14:00.148 Feature Identifiers & Effects Log Page:May Support 00:14:00.148 NVMe-MI Commands & Effects Log Page: May Support 00:14:00.148 Data Area 4 for Telemetry Log: Not Supported 00:14:00.148 Error Log Page Entries Supported: 128 00:14:00.148 Keep Alive: Supported 00:14:00.148 Keep Alive Granularity: 10000 ms 00:14:00.148 00:14:00.148 NVM Command Set Attributes 00:14:00.148 ========================== 00:14:00.148 Submission Queue Entry Size 00:14:00.148 Max: 64 00:14:00.148 Min: 64 00:14:00.148 Completion Queue Entry Size 00:14:00.148 Max: 16 00:14:00.148 Min: 16 00:14:00.148 Number of Namespaces: 32 00:14:00.148 Compare Command: Supported 00:14:00.148 Write Uncorrectable 
Command: Not Supported 00:14:00.148 Dataset Management Command: Supported 00:14:00.148 Write Zeroes Command: Supported 00:14:00.148 Set Features Save Field: Not Supported 00:14:00.148 Reservations: Not Supported 00:14:00.148 Timestamp: Not Supported 00:14:00.148 Copy: Supported 00:14:00.148 Volatile Write Cache: Present 00:14:00.148 Atomic Write Unit (Normal): 1 00:14:00.148 Atomic Write Unit (PFail): 1 00:14:00.148 Atomic Compare & Write Unit: 1 00:14:00.148 Fused Compare & Write: Supported 00:14:00.148 Scatter-Gather List 00:14:00.148 SGL Command Set: Supported (Dword aligned) 00:14:00.148 SGL Keyed: Not Supported 00:14:00.148 SGL Bit Bucket Descriptor: Not Supported 00:14:00.148 SGL Metadata Pointer: Not Supported 00:14:00.148 Oversized SGL: Not Supported 00:14:00.148 SGL Metadata Address: Not Supported 00:14:00.148 SGL Offset: Not Supported 00:14:00.148 Transport SGL Data Block: Not Supported 00:14:00.148 Replay Protected Memory Block: Not Supported 00:14:00.148 00:14:00.148 Firmware Slot Information 00:14:00.148 ========================= 00:14:00.148 Active slot: 1 00:14:00.148 Slot 1 Firmware Revision: 25.01 00:14:00.148 00:14:00.148 00:14:00.148 Commands Supported and Effects 00:14:00.148 ============================== 00:14:00.148 Admin Commands 00:14:00.148 -------------- 00:14:00.148 Get Log Page (02h): Supported 00:14:00.148 Identify (06h): Supported 00:14:00.148 Abort (08h): Supported 00:14:00.148 Set Features (09h): Supported 00:14:00.148 Get Features (0Ah): Supported 00:14:00.148 Asynchronous Event Request (0Ch): Supported 00:14:00.148 Keep Alive (18h): Supported 00:14:00.148 I/O Commands 00:14:00.148 ------------ 00:14:00.148 Flush (00h): Supported LBA-Change 00:14:00.148 Write (01h): Supported LBA-Change 00:14:00.148 Read (02h): Supported 00:14:00.148 Compare (05h): Supported 00:14:00.148 Write Zeroes (08h): Supported LBA-Change 00:14:00.148 Dataset Management (09h): Supported LBA-Change 00:14:00.148 Copy (19h): Supported LBA-Change 00:14:00.148 
00:14:00.148 Error Log 00:14:00.148 ========= 00:14:00.148 00:14:00.148 Arbitration 00:14:00.148 =========== 00:14:00.148 Arbitration Burst: 1 00:14:00.148 00:14:00.148 Power Management 00:14:00.148 ================ 00:14:00.148 Number of Power States: 1 00:14:00.148 Current Power State: Power State #0 00:14:00.148 Power State #0: 00:14:00.148 Max Power: 0.00 W 00:14:00.148 Non-Operational State: Operational 00:14:00.148 Entry Latency: Not Reported 00:14:00.148 Exit Latency: Not Reported 00:14:00.148 Relative Read Throughput: 0 00:14:00.148 Relative Read Latency: 0 00:14:00.148 Relative Write Throughput: 0 00:14:00.148 Relative Write Latency: 0 00:14:00.148 Idle Power: Not Reported 00:14:00.148 Active Power: Not Reported 00:14:00.148 Non-Operational Permissive Mode: Not Supported 00:14:00.148 00:14:00.148 Health Information 00:14:00.148 ================== 00:14:00.148 Critical Warnings: 00:14:00.148 Available Spare Space: OK 00:14:00.148 Temperature: OK 00:14:00.148 Device Reliability: OK 00:14:00.148 Read Only: No 00:14:00.148 Volatile Memory Backup: OK 00:14:00.148 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:00.148 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:00.148 Available Spare: 0% 00:14:00.148 Available Sp[2024-12-06 11:15:32.942521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:00.148 [2024-12-06 11:15:32.942530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:00.148 [2024-12-06 11:15:32.942553] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:00.148 [2024-12-06 11:15:32.942561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.148 [2024-12-06 11:15:32.942566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.148 [2024-12-06 11:15:32.942571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.148 [2024-12-06 11:15:32.942576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.148 [2024-12-06 11:15:32.942765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:00.148 [2024-12-06 11:15:32.942775] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:00.148 [2024-12-06 11:15:32.943767] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.148 [2024-12-06 11:15:32.943813] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:00.148 [2024-12-06 11:15:32.943819] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:00.148 [2024-12-06 11:15:32.944775] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:00.148 [2024-12-06 11:15:32.944784] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:00.148 [2024-12-06 11:15:32.944831] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:00.148 [2024-12-06 11:15:32.947064] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:00.148 are Threshold: 0% 00:14:00.148 Life Percentage Used: 0% 
00:14:00.148 Data Units Read: 0 00:14:00.148 Data Units Written: 0 00:14:00.148 Host Read Commands: 0 00:14:00.148 Host Write Commands: 0 00:14:00.148 Controller Busy Time: 0 minutes 00:14:00.148 Power Cycles: 0 00:14:00.148 Power On Hours: 0 hours 00:14:00.148 Unsafe Shutdowns: 0 00:14:00.148 Unrecoverable Media Errors: 0 00:14:00.148 Lifetime Error Log Entries: 0 00:14:00.148 Warning Temperature Time: 0 minutes 00:14:00.148 Critical Temperature Time: 0 minutes 00:14:00.148 00:14:00.148 Number of Queues 00:14:00.148 ================ 00:14:00.148 Number of I/O Submission Queues: 127 00:14:00.148 Number of I/O Completion Queues: 127 00:14:00.148 00:14:00.148 Active Namespaces 00:14:00.148 ================= 00:14:00.148 Namespace ID:1 00:14:00.148 Error Recovery Timeout: Unlimited 00:14:00.148 Command Set Identifier: NVM (00h) 00:14:00.148 Deallocate: Supported 00:14:00.148 Deallocated/Unwritten Error: Not Supported 00:14:00.148 Deallocated Read Value: Unknown 00:14:00.148 Deallocate in Write Zeroes: Not Supported 00:14:00.148 Deallocated Guard Field: 0xFFFF 00:14:00.148 Flush: Supported 00:14:00.148 Reservation: Supported 00:14:00.148 Namespace Sharing Capabilities: Multiple Controllers 00:14:00.148 Size (in LBAs): 131072 (0GiB) 00:14:00.148 Capacity (in LBAs): 131072 (0GiB) 00:14:00.148 Utilization (in LBAs): 131072 (0GiB) 00:14:00.148 NGUID: 68CEF7883DE24B3DBD2722B290C9D757 00:14:00.148 UUID: 68cef788-3de2-4b3d-bd27-22b290c9d757 00:14:00.148 Thin Provisioning: Not Supported 00:14:00.148 Per-NS Atomic Units: Yes 00:14:00.148 Atomic Boundary Size (Normal): 0 00:14:00.148 Atomic Boundary Size (PFail): 0 00:14:00.148 Atomic Boundary Offset: 0 00:14:00.149 Maximum Single Source Range Length: 65535 00:14:00.149 Maximum Copy Length: 65535 00:14:00.149 Maximum Source Range Count: 1 00:14:00.149 NGUID/EUI64 Never Reused: No 00:14:00.149 Namespace Write Protected: No 00:14:00.149 Number of LBA Formats: 1 00:14:00.149 Current LBA Format: LBA Format #00 00:14:00.149 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:00.149 00:14:00.149 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:00.409 [2024-12-06 11:15:33.164863] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.684 Initializing NVMe Controllers 00:14:05.684 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:05.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:05.684 Initialization complete. Launching workers. 00:14:05.684 ======================================================== 00:14:05.684 Latency(us) 00:14:05.684 Device Information : IOPS MiB/s Average min max 00:14:05.684 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39981.96 156.18 3201.74 885.37 8714.73 00:14:05.684 ======================================================== 00:14:05.684 Total : 39981.96 156.18 3201.74 885.37 8714.73 00:14:05.684 00:14:05.684 [2024-12-06 11:15:38.186508] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.684 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:05.684 [2024-12-06 11:15:38.405451] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.954 Initializing NVMe Controllers 00:14:10.954 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:10.954 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:10.954 Initialization complete. Launching workers. 00:14:10.954 ======================================================== 00:14:10.954 Latency(us) 00:14:10.954 Device Information : IOPS MiB/s Average min max 00:14:10.954 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.89 62.50 8005.49 7790.86 15963.76 00:14:10.954 ======================================================== 00:14:10.954 Total : 15999.89 62.50 8005.49 7790.86 15963.76 00:14:10.954 00:14:10.954 [2024-12-06 11:15:43.447577] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.954 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:10.954 [2024-12-06 11:15:43.645465] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:16.234 [2024-12-06 11:15:48.727381] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:16.234 Initializing NVMe Controllers 00:14:16.234 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:16.234 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:16.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:16.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:16.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:16.234 Initialization complete. 
Launching workers. 00:14:16.234 Starting thread on core 2 00:14:16.234 Starting thread on core 3 00:14:16.234 Starting thread on core 1 00:14:16.234 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:16.234 [2024-12-06 11:15:49.004416] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.419 [2024-12-06 11:15:52.816260] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.419 Initializing NVMe Controllers 00:14:20.419 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:20.419 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:20.419 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:20.419 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:20.419 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:20.419 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:20.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:20.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:20.419 Initialization complete. Launching workers. 
00:14:20.419 Starting thread on core 1 with urgent priority queue 00:14:20.419 Starting thread on core 2 with urgent priority queue 00:14:20.419 Starting thread on core 3 with urgent priority queue 00:14:20.419 Starting thread on core 0 with urgent priority queue 00:14:20.419 SPDK bdev Controller (SPDK1 ) core 0: 2372.33 IO/s 42.15 secs/100000 ios 00:14:20.419 SPDK bdev Controller (SPDK1 ) core 1: 2294.00 IO/s 43.59 secs/100000 ios 00:14:20.419 SPDK bdev Controller (SPDK1 ) core 2: 2018.67 IO/s 49.54 secs/100000 ios 00:14:20.419 SPDK bdev Controller (SPDK1 ) core 3: 2886.67 IO/s 34.64 secs/100000 ios 00:14:20.419 ======================================================== 00:14:20.419 00:14:20.419 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:20.419 [2024-12-06 11:15:53.085788] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.419 Initializing NVMe Controllers 00:14:20.419 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:20.419 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:20.419 Namespace ID: 1 size: 0GB 00:14:20.419 Initialization complete. 00:14:20.419 INFO: using host memory buffer for IO 00:14:20.419 Hello world! 
00:14:20.419 [2024-12-06 11:15:53.120004] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.419 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:20.678 [2024-12-06 11:15:53.386468] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.613 Initializing NVMe Controllers 00:14:21.614 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.614 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.614 Initialization complete. Launching workers. 00:14:21.614 submit (in ns) avg, min, max = 5532.5, 2940.9, 5991630.9 00:14:21.614 complete (in ns) avg, min, max = 20930.7, 1612.7, 3999400.0 00:14:21.614 00:14:21.614 Submit histogram 00:14:21.614 ================ 00:14:21.614 Range in us Cumulative Count 00:14:21.614 2.938 - 2.953: 0.0680% ( 12) 00:14:21.614 2.953 - 2.967: 1.0487% ( 173) 00:14:21.614 2.967 - 2.982: 3.9170% ( 506) 00:14:21.614 2.982 - 2.996: 8.1175% ( 741) 00:14:21.614 2.996 - 3.011: 14.2112% ( 1075) 00:14:21.614 3.011 - 3.025: 20.0442% ( 1029) 00:14:21.614 3.025 - 3.040: 26.0076% ( 1052) 00:14:21.614 3.040 - 3.055: 30.1457% ( 730) 00:14:21.614 3.055 - 3.069: 33.1160% ( 524) 00:14:21.614 3.069 - 3.084: 35.5762% ( 434) 00:14:21.614 3.084 - 3.098: 38.4729% ( 511) 00:14:21.614 3.098 - 3.113: 40.6836% ( 390) 00:14:21.614 3.113 - 3.127: 43.7447% ( 540) 00:14:21.614 3.127 - 3.142: 46.6073% ( 505) 00:14:21.614 3.142 - 3.156: 52.5367% ( 1046) 00:14:21.614 3.156 - 3.171: 59.4637% ( 1222) 00:14:21.614 3.171 - 3.185: 65.7616% ( 1111) 00:14:21.614 3.185 - 3.200: 71.1411% ( 949) 00:14:21.614 3.200 - 3.215: 76.3959% ( 927) 00:14:21.614 3.215 - 3.229: 80.9081% ( 796) 00:14:21.614 3.229 - 3.244: 84.0258% ( 
550) 00:14:21.614 3.244 - 3.258: 86.0949% ( 365) 00:14:21.614 3.258 - 3.273: 87.0926% ( 176) 00:14:21.614 3.273 - 3.287: 87.8635% ( 136) 00:14:21.614 3.287 - 3.302: 88.5551% ( 122) 00:14:21.614 3.302 - 3.316: 89.2977% ( 131) 00:14:21.614 3.316 - 3.331: 90.0573% ( 134) 00:14:21.614 3.331 - 3.345: 90.8282% ( 136) 00:14:21.614 3.345 - 3.360: 91.5198% ( 122) 00:14:21.614 3.360 - 3.375: 92.0639% ( 96) 00:14:21.614 3.375 - 3.389: 92.6818% ( 109) 00:14:21.614 3.389 - 3.404: 93.2317% ( 97) 00:14:21.614 3.404 - 3.418: 93.9006% ( 118) 00:14:21.614 3.418 - 3.433: 94.6942% ( 140) 00:14:21.614 3.433 - 3.447: 95.5955% ( 159) 00:14:21.614 3.447 - 3.462: 96.5138% ( 162) 00:14:21.614 3.462 - 3.476: 97.2734% ( 134) 00:14:21.614 3.476 - 3.491: 97.8062% ( 94) 00:14:21.614 3.491 - 3.505: 98.3051% ( 88) 00:14:21.614 3.505 - 3.520: 98.7019% ( 70) 00:14:21.614 3.520 - 3.535: 99.0193% ( 56) 00:14:21.614 3.535 - 3.549: 99.2291% ( 37) 00:14:21.614 3.549 - 3.564: 99.3594% ( 23) 00:14:21.614 3.564 - 3.578: 99.5125% ( 27) 00:14:21.614 3.578 - 3.593: 99.6372% ( 22) 00:14:21.614 3.593 - 3.607: 99.6826% ( 8) 00:14:21.614 3.607 - 3.622: 99.7109% ( 5) 00:14:21.614 3.622 - 3.636: 99.7279% ( 3) 00:14:21.614 3.636 - 3.651: 99.7336% ( 1) 00:14:21.614 3.782 - 3.811: 99.7392% ( 1) 00:14:21.614 4.858 - 4.887: 99.7449% ( 1) 00:14:21.614 4.945 - 4.975: 99.7562% ( 2) 00:14:21.614 5.004 - 5.033: 99.7676% ( 2) 00:14:21.614 5.062 - 5.091: 99.7733% ( 1) 00:14:21.614 5.149 - 5.178: 99.7846% ( 2) 00:14:21.614 5.178 - 5.207: 99.7903% ( 1) 00:14:21.614 5.207 - 5.236: 99.7959% ( 1) 00:14:21.614 5.236 - 5.265: 99.8016% ( 1) 00:14:21.614 5.411 - 5.440: 99.8073% ( 1) 00:14:21.614 5.527 - 5.556: 99.8129% ( 1) 00:14:21.614 5.615 - 5.644: 99.8186% ( 1) 00:14:21.614 5.731 - 5.760: 99.8243% ( 1) 00:14:21.614 5.818 - 5.847: 99.8299% ( 1) 00:14:21.614 5.847 - 5.876: 99.8356% ( 1) 00:14:21.614 5.905 - 5.935: 99.8413% ( 1) 00:14:21.614 6.255 - 6.284: 99.8469% ( 1) 00:14:21.614 6.371 - 6.400: 99.8526% ( 1) 00:14:21.614 6.400 - 
6.429: 99.8640% ( 2) 00:14:21.614 6.691 - 6.720: 99.8696% ( 1) 00:14:21.614 6.720 - 6.749: 99.8753% ( 1) 00:14:21.614 6.924 - 6.953: 99.8810% ( 1) 00:14:21.614 7.156 - 7.185: 99.8866% ( 1) 00:14:21.614 7.505 - 7.564: 99.8923% ( 1) 00:14:21.614 7.680 - 7.738: 99.8980% ( 1) 00:14:21.614 7.796 - 7.855: 99.9036% ( 1) 00:14:21.614 7.855 - 7.913: 99.9206% ( 3) 00:14:21.614 8.029 - 8.087: 99.9263% ( 1) 00:14:21.614 8.436 - 8.495: 99.9320% ( 1) 00:14:21.614 8.669 - 8.727: 99.9376% ( 1) 00:14:21.614 [2024-12-06 11:15:54.409343] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.614 8.844 - 8.902: 99.9433% ( 1) 00:14:21.614 3991.738 - 4021.527: 99.9943% ( 9) 00:14:21.614 5987.607 - 6017.396: 100.0000% ( 1) 00:14:21.614 00:14:21.614 Complete histogram 00:14:21.614 ================== 00:14:21.614 Range in us Cumulative Count 00:14:21.614 1.607 - 1.615: 0.0113% ( 2) 00:14:21.614 1.615 - 1.622: 0.2438% ( 41) 00:14:21.614 1.622 - 1.629: 1.6609% ( 250) 00:14:21.614 1.629 - 1.636: 3.7243% ( 364) 00:14:21.614 1.636 - 1.644: 5.1811% ( 257) 00:14:21.614 1.644 - 1.651: 5.9350% ( 133) 00:14:21.614 1.651 - 1.658: 6.4112% ( 84) 00:14:21.614 1.658 - 1.665: 6.8704% ( 81) 00:14:21.614 1.665 - 1.673: 13.1455% ( 1107) 00:14:21.614 1.673 - 1.680: 38.2461% ( 4428) 00:14:21.614 1.680 - 1.687: 69.2251% ( 5465) 00:14:21.614 1.687 - 1.695: 84.9385% ( 2772) 00:14:21.614 1.695 - 1.702: 90.6525% ( 1008) 00:14:21.614 1.702 - 1.709: 93.2544% ( 459) 00:14:21.614 1.709 - 1.716: 94.6658% ( 249) 00:14:21.614 1.716 - 1.724: 95.2837% ( 109) 00:14:21.614 1.724 - 1.731: 95.5898% ( 54) 00:14:21.614 1.731 - 1.738: 95.8676% ( 49) 00:14:21.614 1.738 - 1.745: 96.4628% ( 105) 00:14:21.614 1.745 - 1.753: 97.3244% ( 152) 00:14:21.614 1.753 - 1.760: 98.1804% ( 151) 00:14:21.614 1.760 - 1.767: 98.8153% ( 112) 00:14:21.614 1.767 - 1.775: 99.0760% ( 46) 00:14:21.614 1.775 - 1.782: 99.1610% ( 15) 00:14:21.614 1.782 - 1.789: 99.2121% ( 9) 00:14:21.614 1.789 - 1.796: 
99.2404% ( 5) 00:14:21.614 1.796 - 1.804: 99.2461% ( 1) 00:14:21.614 1.804 - 1.811: 99.2574% ( 2) 00:14:21.614 1.818 - 1.825: 99.2687% ( 2) 00:14:21.614 1.833 - 1.840: 99.2744% ( 1) 00:14:21.614 1.840 - 1.847: 99.2801% ( 1) 00:14:21.614 1.862 - 1.876: 99.2858% ( 1) 00:14:21.614 1.876 - 1.891: 99.2914% ( 1) 00:14:21.614 1.905 - 1.920: 99.2971% ( 1) 00:14:21.614 1.920 - 1.935: 99.3084% ( 2) 00:14:21.614 3.142 - 3.156: 99.3141% ( 1) 00:14:21.614 3.418 - 3.433: 99.3198% ( 1) 00:14:21.614 3.520 - 3.535: 99.3254% ( 1) 00:14:21.614 3.549 - 3.564: 99.3311% ( 1) 00:14:21.614 3.564 - 3.578: 99.3368% ( 1) 00:14:21.614 3.593 - 3.607: 99.3424% ( 1) 00:14:21.614 3.622 - 3.636: 99.3481% ( 1) 00:14:21.614 3.724 - 3.753: 99.3538% ( 1) 00:14:21.614 3.840 - 3.869: 99.3594% ( 1) 00:14:21.614 3.985 - 4.015: 99.3708% ( 2) 00:14:21.614 4.015 - 4.044: 99.3765% ( 1) 00:14:21.614 4.131 - 4.160: 99.3821% ( 1) 00:14:21.614 4.335 - 4.364: 99.3878% ( 1) 00:14:21.614 4.451 - 4.480: 99.3935% ( 1) 00:14:21.614 4.567 - 4.596: 99.3991% ( 1) 00:14:21.614 4.625 - 4.655: 99.4048% ( 1) 00:14:21.614 4.713 - 4.742: 99.4105% ( 1) 00:14:21.614 4.742 - 4.771: 99.4161% ( 1) 00:14:21.614 4.771 - 4.800: 99.4218% ( 1) 00:14:21.614 4.858 - 4.887: 99.4275% ( 1) 00:14:21.614 5.033 - 5.062: 99.4331% ( 1) 00:14:21.614 5.178 - 5.207: 99.4388% ( 1) 00:14:21.614 5.207 - 5.236: 99.4445% ( 1) 00:14:21.614 5.498 - 5.527: 99.4501% ( 1) 00:14:21.614 5.731 - 5.760: 99.4558% ( 1) 00:14:21.614 5.818 - 5.847: 99.4615% ( 1) 00:14:21.614 5.993 - 6.022: 99.4672% ( 1) 00:14:21.614 6.284 - 6.313: 99.4728% ( 1) 00:14:21.614 6.662 - 6.691: 99.4785% ( 1) 00:14:21.614 6.778 - 6.807: 99.4842% ( 1) 00:14:21.614 7.185 - 7.215: 99.4898% ( 1) 00:14:21.614 7.215 - 7.244: 99.4955% ( 1) 00:14:21.614 7.505 - 7.564: 99.5012% ( 1) 00:14:21.614 11.404 - 11.462: 99.5068% ( 1) 00:14:21.614 14.895 - 15.011: 99.5125% ( 1) 00:14:21.614 1995.869 - 2010.764: 99.5182% ( 1) 00:14:21.614 3038.487 - 3053.382: 99.5238% ( 1) 00:14:21.614 3187.433 - 3202.327: 
99.5295% ( 1) 00:14:21.614 3619.375 - 3634.269: 99.5352% ( 1) 00:14:21.614 3991.738 - 4021.527: 100.0000% ( 82) 00:14:21.614 00:14:21.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:21.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:21.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:21.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:21.615 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:21.874 [ 00:14:21.874 { 00:14:21.874 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:21.874 "subtype": "Discovery", 00:14:21.874 "listen_addresses": [], 00:14:21.874 "allow_any_host": true, 00:14:21.874 "hosts": [] 00:14:21.874 }, 00:14:21.874 { 00:14:21.874 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:21.874 "subtype": "NVMe", 00:14:21.874 "listen_addresses": [ 00:14:21.874 { 00:14:21.874 "trtype": "VFIOUSER", 00:14:21.874 "adrfam": "IPv4", 00:14:21.874 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:21.874 "trsvcid": "0" 00:14:21.874 } 00:14:21.874 ], 00:14:21.874 "allow_any_host": true, 00:14:21.874 "hosts": [], 00:14:21.874 "serial_number": "SPDK1", 00:14:21.874 "model_number": "SPDK bdev Controller", 00:14:21.874 "max_namespaces": 32, 00:14:21.874 "min_cntlid": 1, 00:14:21.874 "max_cntlid": 65519, 00:14:21.874 "namespaces": [ 00:14:21.874 { 00:14:21.874 "nsid": 1, 00:14:21.874 "bdev_name": "Malloc1", 00:14:21.874 "name": "Malloc1", 00:14:21.874 "nguid": "68CEF7883DE24B3DBD2722B290C9D757", 00:14:21.874 "uuid": "68cef788-3de2-4b3d-bd27-22b290c9d757" 
00:14:21.874 } 00:14:21.874 ] 00:14:21.874 }, 00:14:21.874 { 00:14:21.874 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:21.874 "subtype": "NVMe", 00:14:21.874 "listen_addresses": [ 00:14:21.874 { 00:14:21.874 "trtype": "VFIOUSER", 00:14:21.874 "adrfam": "IPv4", 00:14:21.874 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:21.874 "trsvcid": "0" 00:14:21.874 } 00:14:21.874 ], 00:14:21.874 "allow_any_host": true, 00:14:21.874 "hosts": [], 00:14:21.874 "serial_number": "SPDK2", 00:14:21.874 "model_number": "SPDK bdev Controller", 00:14:21.874 "max_namespaces": 32, 00:14:21.874 "min_cntlid": 1, 00:14:21.874 "max_cntlid": 65519, 00:14:21.874 "namespaces": [ 00:14:21.874 { 00:14:21.874 "nsid": 1, 00:14:21.874 "bdev_name": "Malloc2", 00:14:21.874 "name": "Malloc2", 00:14:21.874 "nguid": "A15F2B45A9C94180B96B52F88D8FCC76", 00:14:21.874 "uuid": "a15f2b45-a9c9-4180-b96b-52f88d8fcc76" 00:14:21.874 } 00:14:21.874 ] 00:14:21.874 } 00:14:21.874 ] 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1683575 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:21.874 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:21.874 [2024-12-06 11:15:54.796085] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.135 Malloc3 00:14:22.135 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:22.135 [2024-12-06 11:15:55.015696] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:22.135 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:22.135 Asynchronous Event Request test 00:14:22.135 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:22.135 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:22.135 Registering asynchronous event callbacks... 00:14:22.135 Starting namespace attribute notice tests for all controllers... 00:14:22.135 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:22.135 aer_cb - Changed Namespace 00:14:22.135 Cleaning up... 
00:14:22.395 [ 00:14:22.395 { 00:14:22.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:22.395 "subtype": "Discovery", 00:14:22.395 "listen_addresses": [], 00:14:22.395 "allow_any_host": true, 00:14:22.395 "hosts": [] 00:14:22.395 }, 00:14:22.395 { 00:14:22.395 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:22.395 "subtype": "NVMe", 00:14:22.395 "listen_addresses": [ 00:14:22.395 { 00:14:22.395 "trtype": "VFIOUSER", 00:14:22.395 "adrfam": "IPv4", 00:14:22.395 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:22.395 "trsvcid": "0" 00:14:22.395 } 00:14:22.395 ], 00:14:22.395 "allow_any_host": true, 00:14:22.395 "hosts": [], 00:14:22.395 "serial_number": "SPDK1", 00:14:22.395 "model_number": "SPDK bdev Controller", 00:14:22.395 "max_namespaces": 32, 00:14:22.395 "min_cntlid": 1, 00:14:22.395 "max_cntlid": 65519, 00:14:22.395 "namespaces": [ 00:14:22.395 { 00:14:22.395 "nsid": 1, 00:14:22.395 "bdev_name": "Malloc1", 00:14:22.395 "name": "Malloc1", 00:14:22.395 "nguid": "68CEF7883DE24B3DBD2722B290C9D757", 00:14:22.395 "uuid": "68cef788-3de2-4b3d-bd27-22b290c9d757" 00:14:22.395 }, 00:14:22.395 { 00:14:22.395 "nsid": 2, 00:14:22.395 "bdev_name": "Malloc3", 00:14:22.395 "name": "Malloc3", 00:14:22.395 "nguid": "2AF0B7FAE52C40B3BF6FE26626764355", 00:14:22.395 "uuid": "2af0b7fa-e52c-40b3-bf6f-e26626764355" 00:14:22.395 } 00:14:22.395 ] 00:14:22.395 }, 00:14:22.395 { 00:14:22.395 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:22.395 "subtype": "NVMe", 00:14:22.395 "listen_addresses": [ 00:14:22.395 { 00:14:22.395 "trtype": "VFIOUSER", 00:14:22.395 "adrfam": "IPv4", 00:14:22.395 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:22.395 "trsvcid": "0" 00:14:22.395 } 00:14:22.395 ], 00:14:22.395 "allow_any_host": true, 00:14:22.395 "hosts": [], 00:14:22.395 "serial_number": "SPDK2", 00:14:22.395 "model_number": "SPDK bdev Controller", 00:14:22.395 "max_namespaces": 32, 00:14:22.395 "min_cntlid": 1, 00:14:22.395 "max_cntlid": 65519, 00:14:22.395 "namespaces": [ 
00:14:22.395 { 00:14:22.395 "nsid": 1, 00:14:22.395 "bdev_name": "Malloc2", 00:14:22.395 "name": "Malloc2", 00:14:22.395 "nguid": "A15F2B45A9C94180B96B52F88D8FCC76", 00:14:22.395 "uuid": "a15f2b45-a9c9-4180-b96b-52f88d8fcc76" 00:14:22.395 } 00:14:22.395 ] 00:14:22.395 } 00:14:22.395 ] 00:14:22.395 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1683575 00:14:22.395 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:22.395 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:22.396 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:22.396 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:22.396 [2024-12-06 11:15:55.248236] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:14:22.396 [2024-12-06 11:15:55.248277] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683596 ] 00:14:22.396 [2024-12-06 11:15:55.284310] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:22.396 [2024-12-06 11:15:55.289554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:22.396 [2024-12-06 11:15:55.289576] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f108ddfe000 00:14:22.396 [2024-12-06 11:15:55.290553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.291561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.292569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.293578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.294582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.295593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.296602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:22.396 
[2024-12-06 11:15:55.297608] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:22.396 [2024-12-06 11:15:55.298614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:22.396 [2024-12-06 11:15:55.298623] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f108ddf3000 00:14:22.396 [2024-12-06 11:15:55.299464] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:22.396 [2024-12-06 11:15:55.312389] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:22.396 [2024-12-06 11:15:55.312414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:22.396 [2024-12-06 11:15:55.314477] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:22.396 [2024-12-06 11:15:55.314510] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:22.396 [2024-12-06 11:15:55.314577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:22.396 [2024-12-06 11:15:55.314588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:22.396 [2024-12-06 11:15:55.314593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:22.396 [2024-12-06 11:15:55.315482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:22.396 [2024-12-06 11:15:55.315490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:22.396 [2024-12-06 11:15:55.315499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:22.396 [2024-12-06 11:15:55.316486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:22.396 [2024-12-06 11:15:55.316494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:22.396 [2024-12-06 11:15:55.316500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.317492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:22.396 [2024-12-06 11:15:55.317499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.318497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:22.396 [2024-12-06 11:15:55.318505] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:22.396 [2024-12-06 11:15:55.318509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.318515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.318621] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:22.396 [2024-12-06 11:15:55.318626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.318630] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:22.396 [2024-12-06 11:15:55.319503] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:22.396 [2024-12-06 11:15:55.320511] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:22.396 [2024-12-06 11:15:55.321523] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:22.396 [2024-12-06 11:15:55.322521] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.396 [2024-12-06 11:15:55.322558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:22.396 [2024-12-06 11:15:55.323531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:22.396 [2024-12-06 11:15:55.323538] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:22.396 [2024-12-06 11:15:55.323543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:22.396 [2024-12-06 11:15:55.323558] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:22.396 [2024-12-06 11:15:55.323564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:22.396 [2024-12-06 11:15:55.323578] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:22.396 [2024-12-06 11:15:55.323583] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:22.396 [2024-12-06 11:15:55.323587] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.396 [2024-12-06 11:15:55.323597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:22.396 [2024-12-06 11:15:55.330067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:22.396 [2024-12-06 11:15:55.330079] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:22.396 [2024-12-06 11:15:55.330085] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:22.396 [2024-12-06 11:15:55.330089] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:22.396 [2024-12-06 11:15:55.330093] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:22.396 [2024-12-06 11:15:55.330097] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:22.396 [2024-12-06 11:15:55.330100] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:22.396 [2024-12-06 11:15:55.330104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:22.396 [2024-12-06 11:15:55.330110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:22.396 [2024-12-06 11:15:55.330119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:22.656 [2024-12-06 11:15:55.338064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:22.656 [2024-12-06 11:15:55.338077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.657 [2024-12-06 11:15:55.338084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.657 [2024-12-06 11:15:55.338091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.657 [2024-12-06 11:15:55.338098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.657 [2024-12-06 11:15:55.338102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.338110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.338117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.346062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.346069] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:22.657 [2024-12-06 11:15:55.346073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.346079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.346086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.346093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.354067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.354120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.354127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:22.657 
[2024-12-06 11:15:55.354134] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:22.657 [2024-12-06 11:15:55.354138] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:22.657 [2024-12-06 11:15:55.354140] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.354146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.362066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.362081] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:22.657 [2024-12-06 11:15:55.362090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.362096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.362102] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:22.657 [2024-12-06 11:15:55.362105] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:22.657 [2024-12-06 11:15:55.362108] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.362114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.370063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.370076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.370082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.370089] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:22.657 [2024-12-06 11:15:55.370092] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:22.657 [2024-12-06 11:15:55.370095] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.370100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.378062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.378070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378105] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:22.657 [2024-12-06 11:15:55.378109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:22.657 [2024-12-06 11:15:55.378113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:22.657 [2024-12-06 11:15:55.378128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.386063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.386074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.394063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.394074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.402063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 
11:15:55.402074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.410062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.410075] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:22.657 [2024-12-06 11:15:55.410080] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:22.657 [2024-12-06 11:15:55.410083] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:22.657 [2024-12-06 11:15:55.410085] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:22.657 [2024-12-06 11:15:55.410088] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:22.657 [2024-12-06 11:15:55.410093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:22.657 [2024-12-06 11:15:55.410099] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:22.657 [2024-12-06 11:15:55.410103] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:22.657 [2024-12-06 11:15:55.410106] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.410110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.410116] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:22.657 [2024-12-06 11:15:55.410121] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:22.657 [2024-12-06 11:15:55.410124] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.410129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.410135] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:22.657 [2024-12-06 11:15:55.410138] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:22.657 [2024-12-06 11:15:55.410141] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:22.657 [2024-12-06 11:15:55.410145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:22.657 [2024-12-06 11:15:55.418064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.418076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.418085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:22.657 [2024-12-06 11:15:55.418091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:22.657 ===================================================== 00:14:22.657 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.657 ===================================================== 00:14:22.657 Controller Capabilities/Features 00:14:22.657 
================================ 00:14:22.657 Vendor ID: 4e58 00:14:22.657 Subsystem Vendor ID: 4e58 00:14:22.657 Serial Number: SPDK2 00:14:22.657 Model Number: SPDK bdev Controller 00:14:22.657 Firmware Version: 25.01 00:14:22.657 Recommended Arb Burst: 6 00:14:22.657 IEEE OUI Identifier: 8d 6b 50 00:14:22.657 Multi-path I/O 00:14:22.657 May have multiple subsystem ports: Yes 00:14:22.658 May have multiple controllers: Yes 00:14:22.658 Associated with SR-IOV VF: No 00:14:22.658 Max Data Transfer Size: 131072 00:14:22.658 Max Number of Namespaces: 32 00:14:22.658 Max Number of I/O Queues: 127 00:14:22.658 NVMe Specification Version (VS): 1.3 00:14:22.658 NVMe Specification Version (Identify): 1.3 00:14:22.658 Maximum Queue Entries: 256 00:14:22.658 Contiguous Queues Required: Yes 00:14:22.658 Arbitration Mechanisms Supported 00:14:22.658 Weighted Round Robin: Not Supported 00:14:22.658 Vendor Specific: Not Supported 00:14:22.658 Reset Timeout: 15000 ms 00:14:22.658 Doorbell Stride: 4 bytes 00:14:22.658 NVM Subsystem Reset: Not Supported 00:14:22.658 Command Sets Supported 00:14:22.658 NVM Command Set: Supported 00:14:22.658 Boot Partition: Not Supported 00:14:22.658 Memory Page Size Minimum: 4096 bytes 00:14:22.658 Memory Page Size Maximum: 4096 bytes 00:14:22.658 Persistent Memory Region: Not Supported 00:14:22.658 Optional Asynchronous Events Supported 00:14:22.658 Namespace Attribute Notices: Supported 00:14:22.658 Firmware Activation Notices: Not Supported 00:14:22.658 ANA Change Notices: Not Supported 00:14:22.658 PLE Aggregate Log Change Notices: Not Supported 00:14:22.658 LBA Status Info Alert Notices: Not Supported 00:14:22.658 EGE Aggregate Log Change Notices: Not Supported 00:14:22.658 Normal NVM Subsystem Shutdown event: Not Supported 00:14:22.658 Zone Descriptor Change Notices: Not Supported 00:14:22.658 Discovery Log Change Notices: Not Supported 00:14:22.658 Controller Attributes 00:14:22.658 128-bit Host Identifier: Supported 00:14:22.658 
Non-Operational Permissive Mode: Not Supported 00:14:22.658 NVM Sets: Not Supported 00:14:22.658 Read Recovery Levels: Not Supported 00:14:22.658 Endurance Groups: Not Supported 00:14:22.658 Predictable Latency Mode: Not Supported 00:14:22.658 Traffic Based Keep ALive: Not Supported 00:14:22.658 Namespace Granularity: Not Supported 00:14:22.658 SQ Associations: Not Supported 00:14:22.658 UUID List: Not Supported 00:14:22.658 Multi-Domain Subsystem: Not Supported 00:14:22.658 Fixed Capacity Management: Not Supported 00:14:22.658 Variable Capacity Management: Not Supported 00:14:22.658 Delete Endurance Group: Not Supported 00:14:22.658 Delete NVM Set: Not Supported 00:14:22.658 Extended LBA Formats Supported: Not Supported 00:14:22.658 Flexible Data Placement Supported: Not Supported 00:14:22.658 00:14:22.658 Controller Memory Buffer Support 00:14:22.658 ================================ 00:14:22.658 Supported: No 00:14:22.658 00:14:22.658 Persistent Memory Region Support 00:14:22.658 ================================ 00:14:22.658 Supported: No 00:14:22.658 00:14:22.658 Admin Command Set Attributes 00:14:22.658 ============================ 00:14:22.658 Security Send/Receive: Not Supported 00:14:22.658 Format NVM: Not Supported 00:14:22.658 Firmware Activate/Download: Not Supported 00:14:22.658 Namespace Management: Not Supported 00:14:22.658 Device Self-Test: Not Supported 00:14:22.658 Directives: Not Supported 00:14:22.658 NVMe-MI: Not Supported 00:14:22.658 Virtualization Management: Not Supported 00:14:22.658 Doorbell Buffer Config: Not Supported 00:14:22.658 Get LBA Status Capability: Not Supported 00:14:22.658 Command & Feature Lockdown Capability: Not Supported 00:14:22.658 Abort Command Limit: 4 00:14:22.658 Async Event Request Limit: 4 00:14:22.658 Number of Firmware Slots: N/A 00:14:22.658 Firmware Slot 1 Read-Only: N/A 00:14:22.658 Firmware Activation Without Reset: N/A 00:14:22.658 Multiple Update Detection Support: N/A 00:14:22.658 Firmware Update 
Granularity: No Information Provided 00:14:22.658 Per-Namespace SMART Log: No 00:14:22.658 Asymmetric Namespace Access Log Page: Not Supported 00:14:22.658 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:22.658 Command Effects Log Page: Supported 00:14:22.658 Get Log Page Extended Data: Supported 00:14:22.658 Telemetry Log Pages: Not Supported 00:14:22.658 Persistent Event Log Pages: Not Supported 00:14:22.658 Supported Log Pages Log Page: May Support 00:14:22.658 Commands Supported & Effects Log Page: Not Supported 00:14:22.658 Feature Identifiers & Effects Log Page:May Support 00:14:22.658 NVMe-MI Commands & Effects Log Page: May Support 00:14:22.658 Data Area 4 for Telemetry Log: Not Supported 00:14:22.658 Error Log Page Entries Supported: 128 00:14:22.658 Keep Alive: Supported 00:14:22.658 Keep Alive Granularity: 10000 ms 00:14:22.658 00:14:22.658 NVM Command Set Attributes 00:14:22.658 ========================== 00:14:22.658 Submission Queue Entry Size 00:14:22.658 Max: 64 00:14:22.658 Min: 64 00:14:22.658 Completion Queue Entry Size 00:14:22.658 Max: 16 00:14:22.658 Min: 16 00:14:22.658 Number of Namespaces: 32 00:14:22.658 Compare Command: Supported 00:14:22.658 Write Uncorrectable Command: Not Supported 00:14:22.658 Dataset Management Command: Supported 00:14:22.658 Write Zeroes Command: Supported 00:14:22.658 Set Features Save Field: Not Supported 00:14:22.658 Reservations: Not Supported 00:14:22.658 Timestamp: Not Supported 00:14:22.658 Copy: Supported 00:14:22.658 Volatile Write Cache: Present 00:14:22.658 Atomic Write Unit (Normal): 1 00:14:22.658 Atomic Write Unit (PFail): 1 00:14:22.658 Atomic Compare & Write Unit: 1 00:14:22.658 Fused Compare & Write: Supported 00:14:22.658 Scatter-Gather List 00:14:22.658 SGL Command Set: Supported (Dword aligned) 00:14:22.658 SGL Keyed: Not Supported 00:14:22.658 SGL Bit Bucket Descriptor: Not Supported 00:14:22.658 SGL Metadata Pointer: Not Supported 00:14:22.658 Oversized SGL: Not Supported 00:14:22.658 SGL 
Metadata Address: Not Supported 00:14:22.658 SGL Offset: Not Supported 00:14:22.658 Transport SGL Data Block: Not Supported 00:14:22.658 Replay Protected Memory Block: Not Supported 00:14:22.658 00:14:22.658 Firmware Slot Information 00:14:22.658 ========================= 00:14:22.658 Active slot: 1 00:14:22.658 Slot 1 Firmware Revision: 25.01 00:14:22.658 00:14:22.658 00:14:22.658 Commands Supported and Effects 00:14:22.658 ============================== 00:14:22.658 Admin Commands 00:14:22.658 -------------- 00:14:22.658 Get Log Page (02h): Supported 00:14:22.658 Identify (06h): Supported 00:14:22.658 Abort (08h): Supported 00:14:22.658 Set Features (09h): Supported 00:14:22.658 Get Features (0Ah): Supported 00:14:22.658 Asynchronous Event Request (0Ch): Supported 00:14:22.658 Keep Alive (18h): Supported 00:14:22.658 I/O Commands 00:14:22.658 ------------ 00:14:22.658 Flush (00h): Supported LBA-Change 00:14:22.658 Write (01h): Supported LBA-Change 00:14:22.658 Read (02h): Supported 00:14:22.658 Compare (05h): Supported 00:14:22.658 Write Zeroes (08h): Supported LBA-Change 00:14:22.658 Dataset Management (09h): Supported LBA-Change 00:14:22.658 Copy (19h): Supported LBA-Change 00:14:22.658 00:14:22.658 Error Log 00:14:22.658 ========= 00:14:22.658 00:14:22.658 Arbitration 00:14:22.658 =========== 00:14:22.658 Arbitration Burst: 1 00:14:22.658 00:14:22.658 Power Management 00:14:22.658 ================ 00:14:22.658 Number of Power States: 1 00:14:22.658 Current Power State: Power State #0 00:14:22.658 Power State #0: 00:14:22.658 Max Power: 0.00 W 00:14:22.658 Non-Operational State: Operational 00:14:22.658 Entry Latency: Not Reported 00:14:22.658 Exit Latency: Not Reported 00:14:22.658 Relative Read Throughput: 0 00:14:22.658 Relative Read Latency: 0 00:14:22.658 Relative Write Throughput: 0 00:14:22.658 Relative Write Latency: 0 00:14:22.658 Idle Power: Not Reported 00:14:22.658 Active Power: Not Reported 00:14:22.658 Non-Operational Permissive Mode: Not 
Supported 00:14:22.658 00:14:22.658 Health Information 00:14:22.658 ================== 00:14:22.658 Critical Warnings: 00:14:22.658 Available Spare Space: OK 00:14:22.658 Temperature: OK 00:14:22.658 Device Reliability: OK 00:14:22.658 Read Only: No 00:14:22.658 Volatile Memory Backup: OK 00:14:22.658 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:22.658 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:22.658 Available Spare: 0% 00:14:22.658 Available Sp[2024-12-06 11:15:55.418170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:22.658 [2024-12-06 11:15:55.426063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:22.658 [2024-12-06 11:15:55.426089] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:22.658 [2024-12-06 11:15:55.426098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.658 [2024-12-06 11:15:55.426103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.659 [2024-12-06 11:15:55.426108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.659 [2024-12-06 11:15:55.426113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.659 [2024-12-06 11:15:55.426160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:22.659 [2024-12-06 11:15:55.426169] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:22.659 
[2024-12-06 11:15:55.427167] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.659 [2024-12-06 11:15:55.427208] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:22.659 [2024-12-06 11:15:55.427213] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:22.659 [2024-12-06 11:15:55.428173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:22.659 [2024-12-06 11:15:55.428183] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:22.659 [2024-12-06 11:15:55.428228] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:22.659 [2024-12-06 11:15:55.431064] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:22.659 are Threshold: 0% 00:14:22.659 Life Percentage Used: 0% 00:14:22.659 Data Units Read: 0 00:14:22.659 Data Units Written: 0 00:14:22.659 Host Read Commands: 0 00:14:22.659 Host Write Commands: 0 00:14:22.659 Controller Busy Time: 0 minutes 00:14:22.659 Power Cycles: 0 00:14:22.659 Power On Hours: 0 hours 00:14:22.659 Unsafe Shutdowns: 0 00:14:22.659 Unrecoverable Media Errors: 0 00:14:22.659 Lifetime Error Log Entries: 0 00:14:22.659 Warning Temperature Time: 0 minutes 00:14:22.659 Critical Temperature Time: 0 minutes 00:14:22.659 00:14:22.659 Number of Queues 00:14:22.659 ================ 00:14:22.659 Number of I/O Submission Queues: 127 00:14:22.659 Number of I/O Completion Queues: 127 00:14:22.659 00:14:22.659 Active Namespaces 00:14:22.659 ================= 00:14:22.659 Namespace ID:1 00:14:22.659 Error Recovery Timeout: Unlimited 
00:14:22.659 Command Set Identifier: NVM (00h) 00:14:22.659 Deallocate: Supported 00:14:22.659 Deallocated/Unwritten Error: Not Supported 00:14:22.659 Deallocated Read Value: Unknown 00:14:22.659 Deallocate in Write Zeroes: Not Supported 00:14:22.659 Deallocated Guard Field: 0xFFFF 00:14:22.659 Flush: Supported 00:14:22.659 Reservation: Supported 00:14:22.659 Namespace Sharing Capabilities: Multiple Controllers 00:14:22.659 Size (in LBAs): 131072 (0GiB) 00:14:22.659 Capacity (in LBAs): 131072 (0GiB) 00:14:22.659 Utilization (in LBAs): 131072 (0GiB) 00:14:22.659 NGUID: A15F2B45A9C94180B96B52F88D8FCC76 00:14:22.659 UUID: a15f2b45-a9c9-4180-b96b-52f88d8fcc76 00:14:22.659 Thin Provisioning: Not Supported 00:14:22.659 Per-NS Atomic Units: Yes 00:14:22.659 Atomic Boundary Size (Normal): 0 00:14:22.659 Atomic Boundary Size (PFail): 0 00:14:22.659 Atomic Boundary Offset: 0 00:14:22.659 Maximum Single Source Range Length: 65535 00:14:22.659 Maximum Copy Length: 65535 00:14:22.659 Maximum Source Range Count: 1 00:14:22.659 NGUID/EUI64 Never Reused: No 00:14:22.659 Namespace Write Protected: No 00:14:22.659 Number of LBA Formats: 1 00:14:22.659 Current LBA Format: LBA Format #00 00:14:22.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:22.659 00:14:22.659 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:22.919 [2024-12-06 11:15:55.648716] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.191 Initializing NVMe Controllers 00:14:28.191 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:28.191 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:28.191 Initialization complete. Launching workers. 00:14:28.191 ======================================================== 00:14:28.191 Latency(us) 00:14:28.192 Device Information : IOPS MiB/s Average min max 00:14:28.192 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39966.71 156.12 3202.46 898.41 7864.64 00:14:28.192 ======================================================== 00:14:28.192 Total : 39966.71 156.12 3202.46 898.41 7864.64 00:14:28.192 00:14:28.192 [2024-12-06 11:16:00.753326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.192 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:28.192 [2024-12-06 11:16:00.974952] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.574 Initializing NVMe Controllers 00:14:33.574 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:33.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:33.574 Initialization complete. Launching workers. 
00:14:33.574 ======================================================== 00:14:33.574 Latency(us) 00:14:33.574 Device Information : IOPS MiB/s Average min max 00:14:33.574 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39991.60 156.22 3200.88 893.41 6757.35 00:14:33.574 ======================================================== 00:14:33.574 Total : 39991.60 156.22 3200.88 893.41 6757.35 00:14:33.574 00:14:33.574 [2024-12-06 11:16:05.995475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.574 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:33.574 [2024-12-06 11:16:06.198175] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.839 [2024-12-06 11:16:11.324153] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.839 Initializing NVMe Controllers 00:14:38.839 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:38.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:38.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:38.839 Initialization complete. Launching workers. 
00:14:38.839 Starting thread on core 2 00:14:38.839 Starting thread on core 3 00:14:38.839 Starting thread on core 1 00:14:38.839 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:38.839 [2024-12-06 11:16:11.608441] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.033 [2024-12-06 11:16:15.481255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.033 Initializing NVMe Controllers 00:14:43.033 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.033 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.033 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:43.033 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:43.033 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:43.033 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:43.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:43.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:43.033 Initialization complete. Launching workers. 
00:14:43.033 Starting thread on core 1 with urgent priority queue 00:14:43.033 Starting thread on core 2 with urgent priority queue 00:14:43.033 Starting thread on core 3 with urgent priority queue 00:14:43.033 Starting thread on core 0 with urgent priority queue 00:14:43.033 SPDK bdev Controller (SPDK2 ) core 0: 801.33 IO/s 124.79 secs/100000 ios 00:14:43.033 SPDK bdev Controller (SPDK2 ) core 1: 730.67 IO/s 136.86 secs/100000 ios 00:14:43.033 SPDK bdev Controller (SPDK2 ) core 2: 1033.00 IO/s 96.81 secs/100000 ios 00:14:43.033 SPDK bdev Controller (SPDK2 ) core 3: 654.67 IO/s 152.75 secs/100000 ios 00:14:43.033 ======================================================== 00:14:43.033 00:14:43.033 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:43.033 [2024-12-06 11:16:15.749362] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.033 Initializing NVMe Controllers 00:14:43.033 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.033 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.033 Namespace ID: 1 size: 0GB 00:14:43.033 Initialization complete. 00:14:43.033 INFO: using host memory buffer for IO 00:14:43.033 Hello world! 
00:14:43.033 [2024-12-06 11:16:15.763456] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.033 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:43.293 [2024-12-06 11:16:16.026118] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.233 Initializing NVMe Controllers 00:14:44.233 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.233 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.233 Initialization complete. Launching workers. 00:14:44.233 submit (in ns) avg, min, max = 6499.0, 2893.6, 3999371.8 00:14:44.233 complete (in ns) avg, min, max = 21238.6, 1585.5, 5990826.4 00:14:44.233 00:14:44.233 Submit histogram 00:14:44.233 ================ 00:14:44.233 Range in us Cumulative Count 00:14:44.233 2.880 - 2.895: 0.0056% ( 1) 00:14:44.233 2.895 - 2.909: 0.0503% ( 8) 00:14:44.233 2.909 - 2.924: 0.1005% ( 9) 00:14:44.233 2.924 - 2.938: 0.2178% ( 21) 00:14:44.233 2.938 - 2.953: 0.7205% ( 90) 00:14:44.233 2.953 - 2.967: 2.7534% ( 364) 00:14:44.233 2.967 - 2.982: 6.4116% ( 655) 00:14:44.233 2.982 - 2.996: 10.7121% ( 770) 00:14:44.233 2.996 - 3.011: 15.8671% ( 923) 00:14:44.233 3.011 - 3.025: 22.3625% ( 1163) 00:14:44.233 3.025 - 3.040: 27.6180% ( 941) 00:14:44.233 3.040 - 3.055: 31.3376% ( 666) 00:14:44.233 3.055 - 3.069: 34.3089% ( 532) 00:14:44.233 3.069 - 3.084: 37.8833% ( 640) 00:14:44.233 3.084 - 3.098: 40.2346% ( 421) 00:14:44.233 3.098 - 3.113: 42.7199% ( 445) 00:14:44.233 3.113 - 3.127: 44.8757% ( 386) 00:14:44.233 3.127 - 3.142: 48.3887% ( 629) 00:14:44.233 3.142 - 3.156: 54.8506% ( 1157) 00:14:44.233 3.156 - 3.171: 61.1673% ( 1131) 00:14:44.233 3.171 - 3.185: 67.1656% ( 1074) 
00:14:44.233 3.185 - 3.200: 71.7286% ( 817) 00:14:44.233 3.200 - 3.215: 76.7774% ( 904) 00:14:44.233 3.215 - 3.229: 81.0612% ( 767) 00:14:44.233 3.229 - 3.244: 84.0882% ( 542) 00:14:44.233 3.244 - 3.258: 85.8140% ( 309) 00:14:44.233 3.258 - 3.273: 87.0260% ( 217) 00:14:44.233 3.273 - 3.287: 88.1095% ( 194) 00:14:44.233 3.287 - 3.302: 88.9193% ( 145) 00:14:44.233 3.302 - 3.316: 89.7235% ( 144) 00:14:44.233 3.316 - 3.331: 90.5390% ( 146) 00:14:44.233 3.331 - 3.345: 91.3376% ( 143) 00:14:44.233 3.345 - 3.360: 91.9631% ( 112) 00:14:44.233 3.360 - 3.375: 92.5272% ( 101) 00:14:44.233 3.375 - 3.389: 93.1025% ( 103) 00:14:44.233 3.389 - 3.404: 93.6107% ( 91) 00:14:44.233 3.404 - 3.418: 94.2251% ( 110) 00:14:44.233 3.418 - 3.433: 95.0572% ( 149) 00:14:44.233 3.433 - 3.447: 95.7610% ( 126) 00:14:44.233 3.447 - 3.462: 96.4926% ( 131) 00:14:44.233 3.462 - 3.476: 97.1572% ( 119) 00:14:44.233 3.476 - 3.491: 97.7157% ( 100) 00:14:44.233 3.491 - 3.505: 98.2016% ( 87) 00:14:44.233 3.505 - 3.520: 98.5926% ( 70) 00:14:44.233 3.520 - 3.535: 98.9388% ( 62) 00:14:44.233 3.535 - 3.549: 99.1455% ( 37) 00:14:44.233 3.549 - 3.564: 99.3354% ( 34) 00:14:44.233 3.564 - 3.578: 99.4527% ( 21) 00:14:44.233 3.578 - 3.593: 99.5588% ( 19) 00:14:44.233 3.593 - 3.607: 99.6090% ( 9) 00:14:44.233 3.607 - 3.622: 99.6593% ( 9) 00:14:44.233 3.622 - 3.636: 99.6705% ( 2) 00:14:44.233 3.636 - 3.651: 99.6761% ( 1) 00:14:44.233 3.651 - 3.665: 99.6817% ( 1) 00:14:44.233 3.680 - 3.695: 99.6872% ( 1) 00:14:44.233 4.858 - 4.887: 99.6928% ( 1) 00:14:44.233 4.887 - 4.916: 99.6984% ( 1) 00:14:44.233 5.033 - 5.062: 99.7040% ( 1) 00:14:44.233 5.091 - 5.120: 99.7096% ( 1) 00:14:44.233 5.120 - 5.149: 99.7152% ( 1) 00:14:44.233 5.178 - 5.207: 99.7207% ( 1) 00:14:44.233 5.236 - 5.265: 99.7263% ( 1) 00:14:44.233 5.265 - 5.295: 99.7319% ( 1) 00:14:44.233 5.295 - 5.324: 99.7375% ( 1) 00:14:44.233 5.324 - 5.353: 99.7487% ( 2) 00:14:44.233 5.440 - 5.469: 99.7598% ( 2) 00:14:44.233 5.789 - 5.818: 99.7654% ( 1) 00:14:44.233 6.167 
- 6.196: 99.7710% ( 1) 00:14:44.233 6.284 - 6.313: 99.7766% ( 1) 00:14:44.233 6.429 - 6.458: 99.7822% ( 1) 00:14:44.233 6.458 - 6.487: 99.7878% ( 1) 00:14:44.233 6.749 - 6.778: 99.7934% ( 1) 00:14:44.233 6.836 - 6.865: 99.7989% ( 1) 00:14:44.233 7.011 - 7.040: 99.8045% ( 1) 00:14:44.233 7.069 - 7.098: 99.8157% ( 2) 00:14:44.233 7.185 - 7.215: 99.8213% ( 1) 00:14:44.233 7.273 - 7.302: 99.8324% ( 2) 00:14:44.233 7.360 - 7.389: 99.8380% ( 1) 00:14:44.233 7.447 - 7.505: 99.8436% ( 1) 00:14:44.233 7.564 - 7.622: 99.8492% ( 1) 00:14:44.233 7.680 - 7.738: 99.8548% ( 1) 00:14:44.233 7.855 - 7.913: 99.8660% ( 2) 00:14:44.233 7.913 - 7.971: 99.8715% ( 1) 00:14:44.233 8.029 - 8.087: 99.8771% ( 1) 00:14:44.233 8.087 - 8.145: 99.8827% ( 1) 00:14:44.233 8.145 - 8.204: 99.8883% ( 1) 00:14:44.233 8.204 - 8.262: 99.8939% ( 1) 00:14:44.233 8.320 - 8.378: 99.8995% ( 1) 00:14:44.233 12.044 - 12.102: 99.9051% ( 1) 00:14:44.233 13.556 - 13.615: 99.9106% ( 1) 00:14:44.233 13.615 - 13.673: 99.9162% ( 1) 00:14:44.233 3991.738 - 4021.527: 100.0000% ( 15) 00:14:44.233 00:14:44.233 Complete histogram 00:14:44.233 ================== 00:14:44.233 Range in us Cumulative Count 00:14:44.233 1.585 - 1.593: 0.0391% ( 7) 00:14:44.233 1.593 - 1.600: 0.1508% ( 20) 00:14:44.233 1.600 - 1.607: 0.2346% ( 15) 00:14:44.233 1.607 - 1.615: 0.2457% ( 2) 00:14:44.233 1.615 - 1.622: 0.2513% ( 1) 00:14:44.233 1.622 - 1.629: 0.4356% ( 33) 00:14:44.233 1.629 - 1.636: 2.3848% ( 349) 00:14:44.233 1.636 - 1.644: 8.8802% ( 1163) 00:14:44.233 1.644 - 1.651: 14.8506% ( 1069) 00:14:44.233 1.651 - 1.658: 17.6543% ( 502) 00:14:44.233 1.658 - 1.665: 19.1120% ( 261) 00:14:44.233 1.665 - 1.673: 20.0838% ( 174) 00:14:44.233 1.673 - 1.680: 20.4245% ( 61) 00:14:44.233 1.680 - 1.687: 22.6026% ( 390) 00:14:44.233 1.687 - 1.695: 38.0508% ( 2766) 00:14:44.233 1.695 - 1.702: 67.6068% ( 5292) 00:14:44.233 1.702 - 1.709: 86.5233% ( 3387) 00:14:44.233 1.709 - 1.716: 92.8958% ( 1141) 00:14:44.233 1.716 - 1.724: 95.3588% ( 441) 
00:14:44.233 1.724 - 1.731: 96.6211% ( 226) 00:14:44.233 1.731 - 1.738: 97.0343% ( 74) 00:14:44.233 1.738 - 1.745: 97.1907% ( 28) 00:14:44.233 1.745 - 1.753: 97.3639% ( 31) 00:14:44.233 1.753 - 1.760: 97.6264% ( 47) 00:14:44.233 1.760 - 1.767: 98.1067% ( 86) 00:14:44.233 1.767 - 1.775: 98.5088% ( 72) 00:14:44.233 1.775 - 1.782: 98.8942% ( 69) 00:14:44.233 1.782 - 1.789: 99.0952% ( 36) 00:14:44.233 1.789 - 1.796: 99.1678% ( 13) 00:14:44.234 1.796 - 1.804: 99.2181% ( 9) 00:14:44.234 1.804 - 1.811: 99.2237% ( 1) 00:14:44.234 1.811 - 1.818: 99.2293% ( 1) 00:14:44.234 1.818 - 1.825: 99.2349% ( 1) 00:14:44.234 1.825 - 1.833: 99.2404% ( 1) 00:14:44.234 1.847 - 1.855: 99.2460% ( 1) 00:14:44.234 1.855 - 1.862: 99.2516% ( 1) 00:14:44.234 1.862 - 1.876: 99.2572% ( 1) 00:14:44.234 1.876 - 1.891: 99.2628% ( 1) 00:14:44.234 1.935 - 1.949: 99.2684% ( 1) 00:14:44.234 2.007 - 2.022: 99.2739% ( 1) 00:14:44.234 2.022 - 2.036: 99.2795% ( 1) 00:14:44.234 2.036 - 2.051: 99.2851% ( 1) 00:14:44.234 2.095 - 2.109: 99.2907% ( 1) 00:14:44.234 2.225 - 2.240: 99.2963% ( 1) 00:14:44.234 2.240 - 2.255: 99.3019% ( 1) 00:14:44.234 2.255 - 2.269: 99.3075% ( 1) 00:14:44.234 3.360 - 3.375: 99.3130% ( 1) 00:14:44.234 3.636 - 3.651: 99.3186% ( 1) 00:14:44.234 3.680 - 3.695: 99.3242% ( 1) 00:14:44.234 3.695 - 3.709: 99.3298% ( 1) 00:14:44.234 3.724 - 3.753: 99.3354% ( 1) 00:14:44.234 3.782 - 3.811: 99.3410% ( 1) 00:14:44.234 4.102 - 4.131: 99.3466% ( 1) 00:14:44.234 4.160 - 4.189: 99.3521% ( 1) 00:14:44.234 4.422 - 4.451: 99.3577% ( 1) 00:14:44.234 4.451 - 4.480: 99.3633% ( 1) 00:14:44.234 4.684 - 4.713: 99.3745% ( 2) 00:14:44.234 4.713 - 4.742: 99.3856% ( 2) 00:14:44.234 4.800 - 4.829: 99.3912% ( 1) 00:14:44.234 4.858 - 4.887: 99.3968% ( 1) 00:14:44.234 5.033 - 5.062: 99.4024% ( 1) 00:14:44.234 5.178 - 5.207: 99.4080% ( 1) 00:14:44.234 5.207 - 5.236: 99.4136% ( 1) 00:14:44.234 5.236 - 5.265: 99.4192% ( 1) 00:14:44.234 5.265 - 5.295: 99.4247% ( 1) 00:14:44.234 5.295 - 5.324: 99.4303% ( 1) 00:14:44.234 
5.353 - 5.382: 99.4359% ( 1) 00:14:44.234 5.702 - 5.731: 99.4415% ( 1) 00:14:44.234 5.964 - 5.993: 99.4471% ( 1) 00:14:44.234 6.080 - 6.109: 99.4527% ( 1) 00:14:44.234 6.138 - 6.167: 99.4583% ( 1) 00:14:44.234 6.371 - 6.400: 99.4638% ( 1) 00:14:44.234 6.633 - 6.662: 99.4694% ( 1) 00:14:44.234 [2024-12-06 11:16:17.123844] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:44.234 6.720 - 6.749: 99.4750% ( 1) 00:14:44.234 6.778 - 6.807: 99.4806% ( 1) 00:14:44.234 6.807 - 6.836: 99.4862% ( 1) 00:14:44.234 7.127 - 7.156: 99.4918% ( 1) 00:14:44.234 12.160 - 12.218: 99.4973% ( 1) 00:14:44.234 14.138 - 14.196: 99.5029% ( 1) 00:14:44.234 118.225 - 118.691: 99.5085% ( 1) 00:14:44.234 1027.724 - 1035.171: 99.5141% ( 1) 00:14:44.234 3217.222 - 3232.116: 99.5197% ( 1) 00:14:44.234 3932.160 - 3961.949: 99.5253% ( 1) 00:14:44.234 3991.738 - 4021.527: 99.9944% ( 84) 00:14:44.234 5987.607 - 6017.396: 100.0000% ( 1) 00:14:44.234 00:14:44.234 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:44.234 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:44.234 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:44.234 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:44.234 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:44.493 [ 00:14:44.493 { 00:14:44.493 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:44.493 "subtype": "Discovery", 00:14:44.493 "listen_addresses": [], 00:14:44.493 "allow_any_host": true, 00:14:44.493 "hosts": []
00:14:44.493 }, 00:14:44.493 { 00:14:44.493 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:44.493 "subtype": "NVMe", 00:14:44.493 "listen_addresses": [ 00:14:44.493 { 00:14:44.493 "trtype": "VFIOUSER", 00:14:44.493 "adrfam": "IPv4", 00:14:44.493 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:44.493 "trsvcid": "0" 00:14:44.493 } 00:14:44.493 ], 00:14:44.493 "allow_any_host": true, 00:14:44.493 "hosts": [], 00:14:44.493 "serial_number": "SPDK1", 00:14:44.493 "model_number": "SPDK bdev Controller", 00:14:44.493 "max_namespaces": 32, 00:14:44.493 "min_cntlid": 1, 00:14:44.493 "max_cntlid": 65519, 00:14:44.493 "namespaces": [ 00:14:44.493 { 00:14:44.493 "nsid": 1, 00:14:44.493 "bdev_name": "Malloc1", 00:14:44.493 "name": "Malloc1", 00:14:44.493 "nguid": "68CEF7883DE24B3DBD2722B290C9D757", 00:14:44.493 "uuid": "68cef788-3de2-4b3d-bd27-22b290c9d757" 00:14:44.493 }, 00:14:44.493 { 00:14:44.493 "nsid": 2, 00:14:44.493 "bdev_name": "Malloc3", 00:14:44.493 "name": "Malloc3", 00:14:44.493 "nguid": "2AF0B7FAE52C40B3BF6FE26626764355", 00:14:44.493 "uuid": "2af0b7fa-e52c-40b3-bf6f-e26626764355" 00:14:44.493 } 00:14:44.493 ] 00:14:44.493 }, 00:14:44.493 { 00:14:44.493 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:44.493 "subtype": "NVMe", 00:14:44.493 "listen_addresses": [ 00:14:44.493 { 00:14:44.493 "trtype": "VFIOUSER", 00:14:44.493 "adrfam": "IPv4", 00:14:44.493 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:44.493 "trsvcid": "0" 00:14:44.493 } 00:14:44.493 ], 00:14:44.493 "allow_any_host": true, 00:14:44.493 "hosts": [], 00:14:44.493 "serial_number": "SPDK2", 00:14:44.493 "model_number": "SPDK bdev Controller", 00:14:44.493 "max_namespaces": 32, 00:14:44.493 "min_cntlid": 1, 00:14:44.493 "max_cntlid": 65519, 00:14:44.493 "namespaces": [ 00:14:44.493 { 00:14:44.493 "nsid": 1, 00:14:44.493 "bdev_name": "Malloc2", 00:14:44.493 "name": "Malloc2", 00:14:44.493 "nguid": "A15F2B45A9C94180B96B52F88D8FCC76", 00:14:44.493 "uuid": "a15f2b45-a9c9-4180-b96b-52f88d8fcc76" 
00:14:44.493 } 00:14:44.493 ] 00:14:44.493 } 00:14:44.493 ] 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1687540 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:44.493 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:44.751 [2024-12-06 11:16:17.496900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.751 Malloc4 00:14:44.751 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:45.010 [2024-12-06 
11:16:17.729633] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:45.010 Asynchronous Event Request test 00:14:45.010 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.010 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.010 Registering asynchronous event callbacks... 00:14:45.010 Starting namespace attribute notice tests for all controllers... 00:14:45.010 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:45.010 aer_cb - Changed Namespace 00:14:45.010 Cleaning up... 00:14:45.010 [ 00:14:45.010 { 00:14:45.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.010 "subtype": "Discovery", 00:14:45.010 "listen_addresses": [], 00:14:45.010 "allow_any_host": true, 00:14:45.010 "hosts": [] 00:14:45.010 }, 00:14:45.010 { 00:14:45.010 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:45.010 "subtype": "NVMe", 00:14:45.010 "listen_addresses": [ 00:14:45.010 { 00:14:45.010 "trtype": "VFIOUSER", 00:14:45.010 "adrfam": "IPv4", 00:14:45.010 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:45.010 "trsvcid": "0" 00:14:45.010 } 00:14:45.010 ], 00:14:45.010 "allow_any_host": true, 00:14:45.010 "hosts": [], 00:14:45.010 "serial_number": "SPDK1", 00:14:45.010 "model_number": "SPDK bdev Controller", 00:14:45.010 "max_namespaces": 32, 00:14:45.010 "min_cntlid": 1, 00:14:45.010 "max_cntlid": 65519, 00:14:45.010 "namespaces": [ 00:14:45.010 { 00:14:45.010 "nsid": 1, 00:14:45.010 "bdev_name": "Malloc1", 00:14:45.010 "name": "Malloc1", 00:14:45.010 "nguid": "68CEF7883DE24B3DBD2722B290C9D757", 00:14:45.010 "uuid": "68cef788-3de2-4b3d-bd27-22b290c9d757" 00:14:45.010 }, 00:14:45.010 { 00:14:45.010 "nsid": 2, 00:14:45.010 "bdev_name": "Malloc3", 00:14:45.010 
"name": "Malloc3", 00:14:45.010 "nguid": "2AF0B7FAE52C40B3BF6FE26626764355", 00:14:45.010 "uuid": "2af0b7fa-e52c-40b3-bf6f-e26626764355" 00:14:45.010 } 00:14:45.010 ] 00:14:45.010 }, 00:14:45.010 { 00:14:45.010 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:45.010 "subtype": "NVMe", 00:14:45.010 "listen_addresses": [ 00:14:45.010 { 00:14:45.010 "trtype": "VFIOUSER", 00:14:45.010 "adrfam": "IPv4", 00:14:45.010 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:45.010 "trsvcid": "0" 00:14:45.010 } 00:14:45.010 ], 00:14:45.010 "allow_any_host": true, 00:14:45.010 "hosts": [], 00:14:45.010 "serial_number": "SPDK2", 00:14:45.010 "model_number": "SPDK bdev Controller", 00:14:45.010 "max_namespaces": 32, 00:14:45.010 "min_cntlid": 1, 00:14:45.010 "max_cntlid": 65519, 00:14:45.010 "namespaces": [ 00:14:45.010 { 00:14:45.010 "nsid": 1, 00:14:45.010 "bdev_name": "Malloc2", 00:14:45.010 "name": "Malloc2", 00:14:45.010 "nguid": "A15F2B45A9C94180B96B52F88D8FCC76", 00:14:45.010 "uuid": "a15f2b45-a9c9-4180-b96b-52f88d8fcc76" 00:14:45.010 }, 00:14:45.010 { 00:14:45.010 "nsid": 2, 00:14:45.010 "bdev_name": "Malloc4", 00:14:45.010 "name": "Malloc4", 00:14:45.010 "nguid": "971EC28C60FF4B51BB71B378108B1EAB", 00:14:45.010 "uuid": "971ec28c-60ff-4b51-bb71-b378108b1eab" 00:14:45.010 } 00:14:45.010 ] 00:14:45.010 } 00:14:45.010 ] 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1687540 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1679098 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1679098 ']' 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1679098 00:14:45.010 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@959 -- # uname 00:14:45.269 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.269 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679098 00:14:45.269 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.269 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.269 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679098' 00:14:45.269 killing process with pid 1679098 00:14:45.269 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1679098 00:14:45.269 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1679098 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1687805 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1687805' 00:14:45.527 Process pid: 1687805 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1687805 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1687805 ']' 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.527 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:45.528 [2024-12-06 11:16:18.284117] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:45.528 [2024-12-06 11:16:18.284931] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:14:45.528 [2024-12-06 11:16:18.284966] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.528 [2024-12-06 11:16:18.358912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.528 [2024-12-06 11:16:18.398219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.528 [2024-12-06 11:16:18.398257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.528 [2024-12-06 11:16:18.398263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.528 [2024-12-06 11:16:18.398269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.528 [2024-12-06 11:16:18.398274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.528 [2024-12-06 11:16:18.399663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.528 [2024-12-06 11:16:18.399695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.528 [2024-12-06 11:16:18.399807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.528 [2024-12-06 11:16:18.399809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.786 [2024-12-06 11:16:18.470891] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:45.786 [2024-12-06 11:16:18.471808] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:45.786 [2024-12-06 11:16:18.471856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:14:45.786 [2024-12-06 11:16:18.472161] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:45.786 [2024-12-06 11:16:18.472172] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:46.354 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.354 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:46.354 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:47.290 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:47.550 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:47.550 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:47.550 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.550 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:47.550 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.809 Malloc1 00:14:47.809 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:47.809 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:48.068 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:48.329 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.329 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:48.329 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:48.588 Malloc2 00:14:48.588 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:48.588 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:48.847 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1687805 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1687805 ']' 
00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1687805 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687805 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687805' 00:14:49.107 killing process with pid 1687805 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1687805 00:14:49.107 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1687805 00:14:49.367 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:49.367 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:49.367 00:14:49.367 real 0m52.422s 00:14:49.367 user 3m20.628s 00:14:49.367 sys 0m3.032s 00:14:49.367 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.367 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.367 ************************************ 00:14:49.367 END TEST nvmf_vfio_user 00:14:49.367 ************************************ 00:14:49.367 11:16:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 ************************************ 00:14:49.368 START TEST nvmf_vfio_user_nvme_compliance 00:14:49.368 ************************************ 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:49.368 * Looking for test storage... 00:14:49.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:49.368 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.628 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:49.628 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:49.628 --rc genhtml_branch_coverage=1 00:14:49.628 --rc genhtml_function_coverage=1 00:14:49.628 --rc genhtml_legend=1 00:14:49.628 --rc geninfo_all_blocks=1 00:14:49.628 --rc geninfo_unexecuted_blocks=1 00:14:49.628 00:14:49.628 ' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.628 --rc genhtml_branch_coverage=1 00:14:49.628 --rc genhtml_function_coverage=1 00:14:49.628 --rc genhtml_legend=1 00:14:49.628 --rc geninfo_all_blocks=1 00:14:49.628 --rc geninfo_unexecuted_blocks=1 00:14:49.628 00:14:49.628 ' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.628 --rc genhtml_branch_coverage=1 00:14:49.628 --rc genhtml_function_coverage=1 00:14:49.628 --rc genhtml_legend=1 00:14:49.628 --rc geninfo_all_blocks=1 00:14:49.628 --rc geninfo_unexecuted_blocks=1 00:14:49.628 00:14:49.628 ' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.628 --rc genhtml_branch_coverage=1 00:14:49.628 --rc genhtml_function_coverage=1 00:14:49.628 --rc genhtml_legend=1 00:14:49.628 --rc geninfo_all_blocks=1 00:14:49.628 --rc geninfo_unexecuted_blocks=1 00:14:49.628 00:14:49.628 ' 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # 
[[ Linux == FreeBSD ]] 00:14:49.628 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.629 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.629 
11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1688521 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1688521' 00:14:49.629 Process pid: 1688521 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1688521 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1688521 ']' 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.629 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.629 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-12-06 11:16:22.415925] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:14:49.629 [2024-12-06 11:16:22.415972] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.629 [2024-12-06 11:16:22.488880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:49.629 [2024-12-06 11:16:22.527731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.629 [2024-12-06 11:16:22.527765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.629 [2024-12-06 11:16:22.527771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.629 [2024-12-06 11:16:22.527777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.629 [2024-12-06 11:16:22.527781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.629 [2024-12-06 11:16:22.529070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.629 [2024-12-06 11:16:22.529177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.629 [2024-12-06 11:16:22.529178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.568 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.568 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:50.568 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.507 11:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.507 malloc0 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:51.507 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:51.507 00:14:51.507 00:14:51.507 CUnit - A unit testing framework for C - Version 2.1-3 00:14:51.507 http://cunit.sourceforge.net/ 00:14:51.507 00:14:51.507 00:14:51.507 Suite: nvme_compliance 00:14:51.766 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 11:16:24.452472] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.766 [2024-12-06 11:16:24.453787] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:51.766 [2024-12-06 11:16:24.453801] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:51.766 [2024-12-06 11:16:24.453807] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:51.766 [2024-12-06 11:16:24.455489] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.766 passed 00:14:51.766 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 11:16:24.530003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.766 [2024-12-06 11:16:24.533019] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.766 passed 00:14:51.766 Test: admin_identify_ns ...[2024-12-06 11:16:24.609656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.766 [2024-12-06 11:16:24.670077] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:51.766 [2024-12-06 11:16:24.678078] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:51.766 [2024-12-06 11:16:24.699165] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:52.025 passed 00:14:52.025 Test: admin_get_features_mandatory_features ...[2024-12-06 11:16:24.771951] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.025 [2024-12-06 11:16:24.774972] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.025 passed 00:14:52.025 Test: admin_get_features_optional_features ...[2024-12-06 11:16:24.847455] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.025 [2024-12-06 11:16:24.850476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.025 passed 00:14:52.025 Test: admin_set_features_number_of_queues ...[2024-12-06 11:16:24.924857] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.284 [2024-12-06 11:16:25.030152] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.284 passed 00:14:52.284 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 11:16:25.100897] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.284 [2024-12-06 11:16:25.103919] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.284 passed 00:14:52.284 Test: admin_get_log_page_with_lpo ...[2024-12-06 11:16:25.177129] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.543 [2024-12-06 11:16:25.247069] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:52.543 [2024-12-06 11:16:25.260123] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.543 passed 00:14:52.543 Test: fabric_property_get ...[2024-12-06 11:16:25.332970] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.543 [2024-12-06 11:16:25.336266] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:52.543 [2024-12-06 11:16:25.338001] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.543 passed 00:14:52.543 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 11:16:25.410478] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.543 [2024-12-06 11:16:25.411698] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:52.543 [2024-12-06 11:16:25.413495] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.543 passed 00:14:52.801 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 11:16:25.488650] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.801 [2024-12-06 11:16:25.573071] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:52.801 [2024-12-06 11:16:25.589068] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:52.801 [2024-12-06 11:16:25.594153] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.801 passed 00:14:52.801 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 11:16:25.664879] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.802 [2024-12-06 11:16:25.666114] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:52.802 [2024-12-06 11:16:25.667902] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.802 passed 00:14:53.061 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 11:16:25.744784] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.061 [2024-12-06 11:16:25.821073] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:53.061 [2024-12-06 
11:16:25.845063] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:53.061 [2024-12-06 11:16:25.850144] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.061 passed 00:14:53.061 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 11:16:25.921847] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.061 [2024-12-06 11:16:25.923081] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:53.061 [2024-12-06 11:16:25.923118] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:53.061 [2024-12-06 11:16:25.924867] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.061 passed 00:14:53.061 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 11:16:25.997852] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.320 [2024-12-06 11:16:26.088064] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:53.320 [2024-12-06 11:16:26.096075] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:53.320 [2024-12-06 11:16:26.104061] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:53.320 [2024-12-06 11:16:26.112064] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:53.320 [2024-12-06 11:16:26.141140] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.320 passed 00:14:53.320 Test: admin_create_io_sq_verify_pc ...[2024-12-06 11:16:26.216920] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.320 [2024-12-06 11:16:26.233070] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:53.320 [2024-12-06 11:16:26.251092] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.579 passed 00:14:53.579 Test: admin_create_io_qp_max_qps ...[2024-12-06 11:16:26.323609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.516 [2024-12-06 11:16:27.428067] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:55.084 [2024-12-06 11:16:27.815982] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.084 passed 00:14:55.084 Test: admin_create_io_sq_shared_cq ...[2024-12-06 11:16:27.894896] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.343 [2024-12-06 11:16:28.027079] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:55.343 [2024-12-06 11:16:28.064145] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.343 passed 00:14:55.343 00:14:55.343 Run Summary: Type Total Ran Passed Failed Inactive 00:14:55.343 suites 1 1 n/a 0 0 00:14:55.343 tests 18 18 18 0 0 00:14:55.343 asserts 360 360 360 0 n/a 00:14:55.343 00:14:55.343 Elapsed time = 1.480 seconds 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1688521 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1688521 ']' 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1688521 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1688521 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1688521' 00:14:55.343 killing process with pid 1688521 00:14:55.343 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1688521 00:14:55.344 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1688521 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:55.603 00:14:55.603 real 0m6.188s 00:14:55.603 user 0m17.537s 00:14:55.603 sys 0m0.550s 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 ************************************ 00:14:55.603 END TEST nvmf_vfio_user_nvme_compliance 00:14:55.603 ************************************ 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 ************************************ 00:14:55.603 START TEST nvmf_vfio_user_fuzz 00:14:55.603 ************************************ 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:55.603 * Looking for test storage... 00:14:55.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:55.603 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.863 11:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:55.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.863 --rc genhtml_branch_coverage=1 00:14:55.863 --rc genhtml_function_coverage=1 00:14:55.863 --rc genhtml_legend=1 00:14:55.863 --rc geninfo_all_blocks=1 00:14:55.863 --rc geninfo_unexecuted_blocks=1 00:14:55.863 00:14:55.863 ' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:55.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.863 --rc genhtml_branch_coverage=1 00:14:55.863 --rc genhtml_function_coverage=1 00:14:55.863 --rc genhtml_legend=1 00:14:55.863 --rc geninfo_all_blocks=1 00:14:55.863 --rc geninfo_unexecuted_blocks=1 00:14:55.863 00:14:55.863 ' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:55.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.863 --rc genhtml_branch_coverage=1 00:14:55.863 --rc genhtml_function_coverage=1 00:14:55.863 --rc genhtml_legend=1 00:14:55.863 --rc geninfo_all_blocks=1 00:14:55.863 --rc geninfo_unexecuted_blocks=1 00:14:55.863 00:14:55.863 ' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:55.863 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:55.863 --rc genhtml_branch_coverage=1 00:14:55.863 --rc genhtml_function_coverage=1 00:14:55.863 --rc genhtml_legend=1 00:14:55.863 --rc geninfo_all_blocks=1 00:14:55.863 --rc geninfo_unexecuted_blocks=1 00:14:55.863 00:14:55.863 ' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.863 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.864 11:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1689791 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1689791' 00:14:55.864 Process pid: 1689791 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1689791 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1689791 ']' 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.864 11:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.864 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.124 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.124 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:56.124 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 malloc0 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:57.063 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:29.133 Fuzzing completed. Shutting down the fuzz application 00:15:29.133 00:15:29.133 Dumping successful admin opcodes: 00:15:29.133 9, 10, 00:15:29.133 Dumping successful io opcodes: 00:15:29.133 0, 00:15:29.133 NS: 0x20000081ef00 I/O qp, Total commands completed: 1107787, total successful commands: 4365, random_seed: 2762651392 00:15:29.133 NS: 0x20000081ef00 admin qp, Total commands completed: 272240, total successful commands: 64, random_seed: 2041599296 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1689791 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1689791 ']' 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1689791 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1689791 00:15:29.133 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.133 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1689791' 00:15:29.133 killing process with pid 1689791 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1689791 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1689791 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:29.134 00:15:29.134 real 0m32.228s 00:15:29.134 user 0m30.104s 00:15:29.134 sys 0m31.020s 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.134 ************************************ 00:15:29.134 END TEST nvmf_vfio_user_fuzz 00:15:29.134 ************************************ 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.134 ************************************ 00:15:29.134 START TEST nvmf_auth_target 00:15:29.134 ************************************ 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:29.134 * Looking for test storage... 00:15:29.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.134 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.134 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.134 --rc genhtml_branch_coverage=1 00:15:29.134 --rc genhtml_function_coverage=1 00:15:29.134 --rc genhtml_legend=1 00:15:29.134 --rc geninfo_all_blocks=1 00:15:29.134 --rc geninfo_unexecuted_blocks=1 00:15:29.134 00:15:29.134 ' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.134 --rc genhtml_branch_coverage=1 00:15:29.134 --rc genhtml_function_coverage=1 00:15:29.134 --rc genhtml_legend=1 00:15:29.134 --rc geninfo_all_blocks=1 00:15:29.134 --rc geninfo_unexecuted_blocks=1 00:15:29.134 00:15:29.134 ' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.134 --rc genhtml_branch_coverage=1 00:15:29.134 --rc genhtml_function_coverage=1 00:15:29.134 --rc genhtml_legend=1 00:15:29.134 --rc geninfo_all_blocks=1 00:15:29.134 --rc geninfo_unexecuted_blocks=1 00:15:29.134 00:15:29.134 ' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:29.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.134 --rc genhtml_branch_coverage=1 00:15:29.134 --rc genhtml_function_coverage=1 00:15:29.134 --rc genhtml_legend=1 00:15:29.134 
--rc geninfo_all_blocks=1 00:15:29.134 --rc geninfo_unexecuted_blocks=1 00:15:29.134 00:15:29.134 ' 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.134 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.135 
11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:29.135 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.135 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.135 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:34.411 11:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.411 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:34.412 11:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:34.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:34.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.412 
11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:34.412 Found net devices under 0000:af:00.0: cvl_0_0 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.412 
11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:34.412 Found net devices under 0000:af:00.1: cvl_0_1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:34.412 11:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:34.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:15:34.412 00:15:34.412 --- 10.0.0.2 ping statistics --- 00:15:34.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.412 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:15:34.412 00:15:34.412 --- 10.0.0.1 ping statistics --- 00:15:34.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.412 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1698720 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1698720 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1698720 ']' 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:34.412 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.413 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1698742 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=20f68bed5de1577f0b2bae89334c3e2d7e62fb034da56ddf 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yFx 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 20f68bed5de1577f0b2bae89334c3e2d7e62fb034da56ddf 0 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 20f68bed5de1577f0b2bae89334c3e2d7e62fb034da56ddf 0 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=20f68bed5de1577f0b2bae89334c3e2d7e62fb034da56ddf 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yFx 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yFx 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.yFx 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c242871867a3f5e50aace26d5b3a4d6f3cdc0585379d669b253ec6d6ab25330 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7xi 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c242871867a3f5e50aace26d5b3a4d6f3cdc0585379d669b253ec6d6ab25330 3 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c242871867a3f5e50aace26d5b3a4d6f3cdc0585379d669b253ec6d6ab25330 3 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c242871867a3f5e50aace26d5b3a4d6f3cdc0585379d669b253ec6d6ab25330 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7xi 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7xi 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7xi 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a39c3f9bf2db42a3eb62daa3b9ceff50 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pUK 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a39c3f9bf2db42a3eb62daa3b9ceff50 1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a39c3f9bf2db42a3eb62daa3b9ceff50 1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a39c3f9bf2db42a3eb62daa3b9ceff50 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:34.413 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pUK 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pUK 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.pUK 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eb3607a74cf6e532a5122fe13ec689e2d52b3f1d1083037a 00:15:34.673 11:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Rh8 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eb3607a74cf6e532a5122fe13ec689e2d52b3f1d1083037a 2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eb3607a74cf6e532a5122fe13ec689e2d52b3f1d1083037a 2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eb3607a74cf6e532a5122fe13ec689e2d52b3f1d1083037a 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Rh8 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Rh8 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Rh8 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aeac06debca071063909b0d7d8fd647cd9bbe65ae8866817 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XoO 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aeac06debca071063909b0d7d8fd647cd9bbe65ae8866817 2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aeac06debca071063909b0d7d8fd647cd9bbe65ae8866817 2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aeac06debca071063909b0d7d8fd647cd9bbe65ae8866817 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XoO 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XoO 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.XoO 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=62c3dff139abb1f5cdea0241eea114bc 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RNi 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 62c3dff139abb1f5cdea0241eea114bc 1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 62c3dff139abb1f5cdea0241eea114bc 1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=62c3dff139abb1f5cdea0241eea114bc 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
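As traced above, `gen_dhchap_key` (nvmf/common.sh) maps a digest name to a DHHC-1 hash identifier, draws len/2 random bytes with `xxd -p -c0 -l <len/2> /dev/urandom`, and stores the hex key in a `mktemp`-named file with mode 0600. A minimal Python sketch of that flow (function name and return shape are illustrative, not SPDK's actual helper):

```python
import os
import tempfile

# Hash identifiers as used by the trace: 'null'=0, 'sha256'=1, 'sha384'=2, 'sha512'=3
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_key(digest: str, length: int) -> tuple[str, str]:
    """Draw length/2 random bytes, hex-encode them to a length-char key,
    and write the key to a spdk.key-<digest>.* temp file (mode 0600)."""
    key = os.urandom(length // 2).hex()  # equivalent to: xxd -p -c0 -l length/2 /dev/urandom
    fd, path = tempfile.mkstemp(prefix=f"spdk.key-{digest}.")
    with os.fdopen(fd, "w") as f:
        f.write(key)
    os.chmod(path, 0o600)  # matches the 'chmod 0600 /tmp/spdk.key-...' step in the trace
    return key, path
```

For `gen_dhchap_key sha384 48` this yields a 48-character hex string, matching the 24-byte `xxd` reads seen above.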
00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RNi 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RNi 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RNi 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7b02b4520d521c10fb1c208202af5d1a8acbc78508236f265837754caf17b42d 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oFq 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7b02b4520d521c10fb1c208202af5d1a8acbc78508236f265837754caf17b42d 3 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 7b02b4520d521c10fb1c208202af5d1a8acbc78508236f265837754caf17b42d 3 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7b02b4520d521c10fb1c208202af5d1a8acbc78508236f265837754caf17b42d 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oFq 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oFq 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oFq 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1698720 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1698720 ']' 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
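The `python -` heredoc that `format_key` runs is not expanded in the trace. Per the NVMe-oF DH-HMAC-CHAP secret representation, it wraps the secret as `DHHC-1:<hh>:<base64>:`, where the base64 payload is the secret bytes followed by a 4-byte CRC-32. A hedged reconstruction (the little-endian CRC byte order is an assumption; check nvmf/common.sh@733 in your tree):

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Wrap an ASCII secret as a DHHC-1 string: the secret bytes plus a
    CRC-32 tail (byte order assumed little-endian) are base64-encoded together."""
    secret = key_hex.encode("ascii")
    crc = zlib.crc32(secret).to_bytes(4, "little")  # endianness is an assumption
    payload = base64.b64encode(secret + crc).decode("ascii")
    return "DHHC-1:%02x:%s:" % (digest_id, payload)
```

Round-tripping the payload (drop the 4-byte CRC tail) recovers the original hex key, which is how the secrets in the later `nvme connect` commands relate back to the key files created here.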
00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.673 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1698742 /var/tmp/host.sock 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1698742 ']' 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:34.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
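The two `waitforlisten` calls above block until the target daemon (`/var/tmp/spdk.sock`) and the host daemon (`/var/tmp/host.sock`) accept RPCs on their UNIX domain sockets. A minimal sketch of that wait loop, with illustrative names (not autotest_common.sh's actual implementation):

```python
import os
import socket
import time

def wait_for_listen(path: str, timeout: float = 5.0, poll: float = 0.05) -> bool:
    """Poll until a process accepts connections on the UNIX domain socket at path."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)  # connect succeeds only once the daemon is listening
                return True
            except OSError:
                pass  # socket file exists but nobody is accepting yet
            finally:
                s.close()
        time.sleep(poll)
    return False
```

In the trace, `max_retries=100` bounds the equivalent retry loop before the test gives up.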
00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.932 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yFx 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yFx 00:15:35.191 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yFx 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.7xi ]] 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7xi 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7xi 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7xi 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pUK 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pUK 00:15:35.451 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pUK 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Rh8 ]] 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rh8 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rh8 00:15:35.710 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rh8 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XoO 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.XoO 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.XoO 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RNi ]] 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RNi 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.969 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.970 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RNi 00:15:35.970 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RNi 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oFq 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oFq 00:15:36.228 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oFq 00:15:36.488 11:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:36.488 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:36.488 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.488 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.488 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.488 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.748 11:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.748 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.748 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.007 { 00:15:37.007 "cntlid": 1, 00:15:37.007 "qid": 0, 00:15:37.007 "state": "enabled", 00:15:37.007 "thread": "nvmf_tgt_poll_group_000", 00:15:37.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:37.007 "listen_address": { 00:15:37.007 "trtype": "TCP", 00:15:37.007 "adrfam": "IPv4", 00:15:37.007 "traddr": "10.0.0.2", 00:15:37.007 "trsvcid": "4420" 00:15:37.007 }, 00:15:37.007 "peer_address": { 00:15:37.007 "trtype": "TCP", 00:15:37.007 "adrfam": "IPv4", 00:15:37.007 "traddr": "10.0.0.1", 00:15:37.007 "trsvcid": "51170" 00:15:37.007 }, 00:15:37.007 "auth": { 00:15:37.007 "state": "completed", 00:15:37.007 "digest": "sha256", 00:15:37.007 "dhgroup": "null" 00:15:37.007 } 00:15:37.007 } 00:15:37.007 ]' 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.007 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.267 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.267 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.267 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.267 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.267 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.267 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:37.267 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:37.835 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.095 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.355 00:15:38.355 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.355 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.355 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.615 { 00:15:38.615 "cntlid": 3, 00:15:38.615 "qid": 0, 00:15:38.615 "state": "enabled", 00:15:38.615 "thread": "nvmf_tgt_poll_group_000", 00:15:38.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:38.615 "listen_address": { 00:15:38.615 "trtype": "TCP", 00:15:38.615 "adrfam": "IPv4", 00:15:38.615 
"traddr": "10.0.0.2", 00:15:38.615 "trsvcid": "4420" 00:15:38.615 }, 00:15:38.615 "peer_address": { 00:15:38.615 "trtype": "TCP", 00:15:38.615 "adrfam": "IPv4", 00:15:38.615 "traddr": "10.0.0.1", 00:15:38.615 "trsvcid": "51182" 00:15:38.615 }, 00:15:38.615 "auth": { 00:15:38.615 "state": "completed", 00:15:38.615 "digest": "sha256", 00:15:38.615 "dhgroup": "null" 00:15:38.615 } 00:15:38.615 } 00:15:38.615 ]' 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.615 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.874 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:38.874 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.443 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.703 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.962 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.962 
11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.962 { 00:15:39.962 "cntlid": 5, 00:15:39.962 "qid": 0, 00:15:39.962 "state": "enabled", 00:15:39.962 "thread": "nvmf_tgt_poll_group_000", 00:15:39.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:39.962 "listen_address": { 00:15:39.962 "trtype": "TCP", 00:15:39.962 "adrfam": "IPv4", 00:15:39.962 "traddr": "10.0.0.2", 00:15:39.962 "trsvcid": "4420" 00:15:39.962 }, 00:15:39.962 "peer_address": { 00:15:39.962 "trtype": "TCP", 00:15:39.962 "adrfam": "IPv4", 00:15:39.962 "traddr": "10.0.0.1", 00:15:39.962 "trsvcid": "51198" 00:15:39.962 }, 00:15:39.962 "auth": { 00:15:39.962 "state": "completed", 00:15:39.962 "digest": "sha256", 00:15:39.962 "dhgroup": "null" 00:15:39.962 } 00:15:39.962 } 00:15:39.962 ]' 00:15:39.962 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.222 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.222 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:40.222 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:40.222 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.222 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.222 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.222 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.507 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:40.507 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:40.765 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.025 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.284 00:15:41.284 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.284 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.284 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.542 
11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.542 { 00:15:41.542 "cntlid": 7, 00:15:41.542 "qid": 0, 00:15:41.542 "state": "enabled", 00:15:41.542 "thread": "nvmf_tgt_poll_group_000", 00:15:41.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:41.542 "listen_address": { 00:15:41.542 "trtype": "TCP", 00:15:41.542 "adrfam": "IPv4", 00:15:41.542 "traddr": "10.0.0.2", 00:15:41.542 "trsvcid": "4420" 00:15:41.542 }, 00:15:41.542 "peer_address": { 00:15:41.542 "trtype": "TCP", 00:15:41.542 "adrfam": "IPv4", 00:15:41.542 "traddr": "10.0.0.1", 00:15:41.542 "trsvcid": "56916" 00:15:41.542 }, 00:15:41.542 "auth": { 00:15:41.542 "state": "completed", 00:15:41.542 "digest": "sha256", 00:15:41.542 "dhgroup": "null" 00:15:41.542 } 00:15:41.542 } 00:15:41.542 ]' 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.542 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.801 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:41.801 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.366 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.623 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.624 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.881 00:15:42.881 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.881 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.881 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.882 { 00:15:42.882 "cntlid": 9, 00:15:42.882 "qid": 0, 00:15:42.882 "state": "enabled", 00:15:42.882 "thread": "nvmf_tgt_poll_group_000", 00:15:42.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:42.882 "listen_address": { 00:15:42.882 "trtype": "TCP", 00:15:42.882 "adrfam": "IPv4", 00:15:42.882 "traddr": "10.0.0.2", 00:15:42.882 "trsvcid": "4420" 00:15:42.882 }, 00:15:42.882 "peer_address": { 00:15:42.882 "trtype": "TCP", 00:15:42.882 "adrfam": "IPv4", 00:15:42.882 "traddr": "10.0.0.1", 00:15:42.882 "trsvcid": "56948" 00:15:42.882 
}, 00:15:42.882 "auth": { 00:15:42.882 "state": "completed", 00:15:42.882 "digest": "sha256", 00:15:42.882 "dhgroup": "ffdhe2048" 00:15:42.882 } 00:15:42.882 } 00:15:42.882 ]' 00:15:42.882 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.139 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.398 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:43.398 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret 
DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.964 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.222 00:15:44.222 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.222 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.222 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.479 { 00:15:44.479 "cntlid": 11, 00:15:44.479 "qid": 0, 00:15:44.479 "state": "enabled", 00:15:44.479 "thread": "nvmf_tgt_poll_group_000", 00:15:44.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:44.479 "listen_address": { 00:15:44.479 "trtype": "TCP", 00:15:44.479 "adrfam": "IPv4", 00:15:44.479 "traddr": "10.0.0.2", 00:15:44.479 "trsvcid": "4420" 00:15:44.479 }, 00:15:44.479 "peer_address": { 00:15:44.479 "trtype": "TCP", 00:15:44.479 "adrfam": "IPv4", 00:15:44.479 "traddr": "10.0.0.1", 00:15:44.479 "trsvcid": "56970" 00:15:44.479 }, 00:15:44.479 "auth": { 00:15:44.479 "state": "completed", 00:15:44.479 "digest": "sha256", 00:15:44.479 "dhgroup": "ffdhe2048" 00:15:44.479 } 00:15:44.479 } 00:15:44.479 ]' 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.479 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.479 11:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.480 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.480 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.480 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.480 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.738 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:44.738 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.307 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.567 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.851 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.851 11:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.851 { 00:15:45.851 "cntlid": 13, 00:15:45.851 "qid": 0, 00:15:45.851 "state": "enabled", 00:15:45.851 "thread": "nvmf_tgt_poll_group_000", 00:15:45.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:45.851 "listen_address": { 00:15:45.851 "trtype": "TCP", 00:15:45.851 "adrfam": "IPv4", 00:15:45.851 "traddr": "10.0.0.2", 00:15:45.851 "trsvcid": "4420" 00:15:45.851 }, 00:15:45.851 "peer_address": { 00:15:45.851 "trtype": "TCP", 00:15:45.851 "adrfam": "IPv4", 00:15:45.851 "traddr": "10.0.0.1", 00:15:45.851 "trsvcid": "56992" 00:15:45.851 }, 00:15:45.851 "auth": { 00:15:45.851 "state": "completed", 00:15:45.851 "digest": "sha256", 00:15:45.851 "dhgroup": "ffdhe2048" 00:15:45.851 } 00:15:45.851 } 00:15:45.851 ]' 00:15:45.851 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.151 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.151 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:46.151 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.744 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.002 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.003 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.261 00:15:47.261 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.261 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.261 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.519 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.519 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.520 { 00:15:47.520 "cntlid": 15, 00:15:47.520 "qid": 0, 00:15:47.520 "state": "enabled", 00:15:47.520 "thread": "nvmf_tgt_poll_group_000", 00:15:47.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:47.520 "listen_address": { 00:15:47.520 "trtype": "TCP", 00:15:47.520 "adrfam": "IPv4", 00:15:47.520 "traddr": "10.0.0.2", 00:15:47.520 "trsvcid": "4420" 00:15:47.520 }, 00:15:47.520 "peer_address": { 00:15:47.520 "trtype": "TCP", 00:15:47.520 "adrfam": "IPv4", 00:15:47.520 "traddr": "10.0.0.1", 
00:15:47.520 "trsvcid": "57016" 00:15:47.520 }, 00:15:47.520 "auth": { 00:15:47.520 "state": "completed", 00:15:47.520 "digest": "sha256", 00:15:47.520 "dhgroup": "ffdhe2048" 00:15:47.520 } 00:15:47.520 } 00:15:47.520 ]' 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.520 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.779 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:47.779 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.345 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.603 11:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.862 { 00:15:48.862 "cntlid": 17, 00:15:48.862 "qid": 0, 00:15:48.862 "state": "enabled", 00:15:48.862 "thread": "nvmf_tgt_poll_group_000", 00:15:48.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:48.862 "listen_address": { 00:15:48.862 "trtype": "TCP", 00:15:48.862 "adrfam": "IPv4", 00:15:48.862 "traddr": "10.0.0.2", 00:15:48.862 "trsvcid": "4420" 00:15:48.862 }, 00:15:48.862 "peer_address": { 00:15:48.862 "trtype": "TCP", 00:15:48.862 "adrfam": "IPv4", 00:15:48.862 "traddr": "10.0.0.1", 00:15:48.862 "trsvcid": "57036" 00:15:48.862 }, 00:15:48.862 "auth": { 00:15:48.862 "state": "completed", 00:15:48.862 "digest": "sha256", 00:15:48.862 "dhgroup": "ffdhe3072" 00:15:48.862 } 00:15:48.862 } 00:15:48.862 ]' 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.862 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.122 11:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.122 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.122 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.122 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.122 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.381 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:49.381 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:49.949 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.949 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:49.950 11:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.950 11:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.950 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.209 00:15:50.209 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.209 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.209 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.468 { 00:15:50.468 "cntlid": 19, 00:15:50.468 "qid": 0, 00:15:50.468 "state": "enabled", 00:15:50.468 "thread": "nvmf_tgt_poll_group_000", 00:15:50.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:50.468 "listen_address": { 00:15:50.468 "trtype": "TCP", 00:15:50.468 "adrfam": "IPv4", 00:15:50.468 "traddr": "10.0.0.2", 00:15:50.468 "trsvcid": "4420" 00:15:50.468 }, 00:15:50.468 "peer_address": { 00:15:50.468 "trtype": "TCP", 00:15:50.468 "adrfam": "IPv4", 00:15:50.468 "traddr": "10.0.0.1", 00:15:50.468 "trsvcid": "57050" 00:15:50.468 }, 00:15:50.468 "auth": { 00:15:50.468 "state": "completed", 00:15:50.468 "digest": "sha256", 00:15:50.468 "dhgroup": "ffdhe3072" 00:15:50.468 } 00:15:50.468 } 00:15:50.468 ]' 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.468 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.728 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:50.728 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.295 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.295 11:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.554 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.813 00:15:51.813 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.813 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.813 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.071 { 00:15:52.071 "cntlid": 21, 00:15:52.071 "qid": 0, 00:15:52.071 "state": "enabled", 00:15:52.071 "thread": "nvmf_tgt_poll_group_000", 00:15:52.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:52.071 "listen_address": { 00:15:52.071 "trtype": "TCP", 00:15:52.071 "adrfam": "IPv4", 00:15:52.071 "traddr": "10.0.0.2", 00:15:52.071 
"trsvcid": "4420" 00:15:52.071 }, 00:15:52.071 "peer_address": { 00:15:52.071 "trtype": "TCP", 00:15:52.071 "adrfam": "IPv4", 00:15:52.071 "traddr": "10.0.0.1", 00:15:52.071 "trsvcid": "52496" 00:15:52.071 }, 00:15:52.071 "auth": { 00:15:52.071 "state": "completed", 00:15:52.071 "digest": "sha256", 00:15:52.071 "dhgroup": "ffdhe3072" 00:15:52.071 } 00:15:52.071 } 00:15:52.071 ]' 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.071 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.330 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:52.330 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.895 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.153 00:15:53.153 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.153 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.153 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.412 { 00:15:53.412 "cntlid": 23, 00:15:53.412 "qid": 0, 00:15:53.412 "state": "enabled", 00:15:53.412 "thread": "nvmf_tgt_poll_group_000", 00:15:53.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:53.412 "listen_address": { 00:15:53.412 "trtype": "TCP", 00:15:53.412 "adrfam": "IPv4", 00:15:53.412 "traddr": "10.0.0.2", 00:15:53.412 "trsvcid": "4420" 00:15:53.412 }, 00:15:53.412 "peer_address": { 00:15:53.412 "trtype": "TCP", 00:15:53.412 "adrfam": "IPv4", 00:15:53.412 "traddr": "10.0.0.1", 00:15:53.412 "trsvcid": "52532" 00:15:53.412 }, 00:15:53.412 "auth": { 00:15:53.412 "state": "completed", 00:15:53.412 "digest": "sha256", 00:15:53.412 "dhgroup": "ffdhe3072" 00:15:53.412 } 00:15:53.412 } 00:15:53.412 ]' 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.412 11:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.412 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.670 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.670 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.670 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.670 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:53.670 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.238 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.496 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.755 00:15:54.755 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.755 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.755 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.014 11:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.014 { 00:15:55.014 "cntlid": 25, 00:15:55.014 "qid": 0, 00:15:55.014 "state": "enabled", 00:15:55.014 "thread": "nvmf_tgt_poll_group_000", 00:15:55.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:55.014 "listen_address": { 00:15:55.014 "trtype": "TCP", 00:15:55.014 "adrfam": "IPv4", 00:15:55.014 "traddr": "10.0.0.2", 00:15:55.014 "trsvcid": "4420" 00:15:55.014 }, 00:15:55.014 "peer_address": { 00:15:55.014 "trtype": "TCP", 00:15:55.014 "adrfam": "IPv4", 00:15:55.014 "traddr": "10.0.0.1", 00:15:55.014 "trsvcid": "52558" 00:15:55.014 }, 00:15:55.014 "auth": { 00:15:55.014 "state": "completed", 00:15:55.014 "digest": "sha256", 00:15:55.014 "dhgroup": "ffdhe4096" 00:15:55.014 } 00:15:55.014 } 00:15:55.014 ]' 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.014 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.273 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:55.273 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.838 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.839 11:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.097 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.357 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.357 { 00:15:56.357 "cntlid": 27, 00:15:56.357 "qid": 0, 00:15:56.357 "state": "enabled", 00:15:56.357 "thread": "nvmf_tgt_poll_group_000", 00:15:56.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:56.357 "listen_address": { 00:15:56.357 "trtype": "TCP", 00:15:56.357 "adrfam": "IPv4", 00:15:56.357 "traddr": "10.0.0.2", 00:15:56.357 
"trsvcid": "4420" 00:15:56.357 }, 00:15:56.357 "peer_address": { 00:15:56.357 "trtype": "TCP", 00:15:56.357 "adrfam": "IPv4", 00:15:56.357 "traddr": "10.0.0.1", 00:15:56.357 "trsvcid": "52590" 00:15:56.357 }, 00:15:56.357 "auth": { 00:15:56.357 "state": "completed", 00:15:56.357 "digest": "sha256", 00:15:56.357 "dhgroup": "ffdhe4096" 00:15:56.357 } 00:15:56.357 } 00:15:56.357 ]' 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.357 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:56.616 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.184 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.442 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:57.442 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.443 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.701 00:15:57.701 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.701 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:57.701 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.960 { 00:15:57.960 "cntlid": 29, 00:15:57.960 "qid": 0, 00:15:57.960 "state": "enabled", 00:15:57.960 "thread": "nvmf_tgt_poll_group_000", 00:15:57.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:57.960 "listen_address": { 00:15:57.960 "trtype": "TCP", 00:15:57.960 "adrfam": "IPv4", 00:15:57.960 "traddr": "10.0.0.2", 00:15:57.960 "trsvcid": "4420" 00:15:57.960 }, 00:15:57.960 "peer_address": { 00:15:57.960 "trtype": "TCP", 00:15:57.960 "adrfam": "IPv4", 00:15:57.960 "traddr": "10.0.0.1", 00:15:57.960 "trsvcid": "52622" 00:15:57.960 }, 00:15:57.960 "auth": { 00:15:57.960 "state": "completed", 00:15:57.960 "digest": "sha256", 00:15:57.960 "dhgroup": "ffdhe4096" 00:15:57.960 } 00:15:57.960 } 00:15:57.960 ]' 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.960 11:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.960 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.219 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:58.219 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.787 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.046 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.305 00:15:59.305 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.305 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.305 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.564 { 00:15:59.564 "cntlid": 31, 00:15:59.564 "qid": 0, 00:15:59.564 "state": "enabled", 00:15:59.564 "thread": "nvmf_tgt_poll_group_000", 00:15:59.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:59.564 "listen_address": { 00:15:59.564 "trtype": "TCP", 00:15:59.564 "adrfam": "IPv4", 00:15:59.564 "traddr": "10.0.0.2", 00:15:59.564 "trsvcid": "4420" 00:15:59.564 }, 00:15:59.564 "peer_address": { 00:15:59.564 "trtype": "TCP", 00:15:59.564 "adrfam": "IPv4", 00:15:59.564 "traddr": "10.0.0.1", 00:15:59.564 "trsvcid": "52640" 00:15:59.564 }, 00:15:59.564 "auth": { 00:15:59.564 "state": "completed", 00:15:59.564 "digest": "sha256", 00:15:59.564 "dhgroup": "ffdhe4096" 00:15:59.564 } 00:15:59.564 } 00:15:59.564 ]' 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.564 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.823 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:15:59.823 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.390 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.390 11:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.647 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.648 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.648 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.648 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.648 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.905 00:16:00.905 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.905 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.905 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.163 { 00:16:01.163 "cntlid": 33, 00:16:01.163 "qid": 0, 00:16:01.163 "state": "enabled", 00:16:01.163 "thread": "nvmf_tgt_poll_group_000", 00:16:01.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:01.163 "listen_address": { 00:16:01.163 "trtype": "TCP", 00:16:01.163 "adrfam": "IPv4", 00:16:01.163 "traddr": "10.0.0.2", 00:16:01.163 
"trsvcid": "4420" 00:16:01.163 }, 00:16:01.163 "peer_address": { 00:16:01.163 "trtype": "TCP", 00:16:01.163 "adrfam": "IPv4", 00:16:01.163 "traddr": "10.0.0.1", 00:16:01.163 "trsvcid": "34048" 00:16:01.163 }, 00:16:01.163 "auth": { 00:16:01.163 "state": "completed", 00:16:01.163 "digest": "sha256", 00:16:01.163 "dhgroup": "ffdhe6144" 00:16:01.163 } 00:16:01.163 } 00:16:01.163 ]' 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.163 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.163 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.163 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.421 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:01.421 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.988 11:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.988 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.247 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.247 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.247 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.247 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.247 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.525 00:16:02.525 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.525 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.525 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.782 { 00:16:02.782 "cntlid": 35, 00:16:02.782 "qid": 0, 00:16:02.782 "state": "enabled", 00:16:02.782 "thread": "nvmf_tgt_poll_group_000", 00:16:02.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:02.782 "listen_address": { 00:16:02.782 "trtype": "TCP", 00:16:02.782 "adrfam": "IPv4", 00:16:02.782 "traddr": "10.0.0.2", 00:16:02.782 "trsvcid": "4420" 00:16:02.782 }, 00:16:02.782 "peer_address": { 00:16:02.782 "trtype": "TCP", 00:16:02.782 "adrfam": "IPv4", 00:16:02.782 "traddr": "10.0.0.1", 00:16:02.782 "trsvcid": "34084" 00:16:02.782 }, 00:16:02.782 "auth": { 00:16:02.782 "state": "completed", 00:16:02.782 "digest": "sha256", 00:16:02.782 "dhgroup": "ffdhe6144" 00:16:02.782 } 00:16:02.782 } 00:16:02.782 ]' 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.782 11:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.782 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.040 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:03.040 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.605 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.170 00:16:04.170 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.170 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.170 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.170 11:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.170 { 00:16:04.170 "cntlid": 37, 00:16:04.170 "qid": 0, 00:16:04.170 "state": "enabled", 00:16:04.170 "thread": "nvmf_tgt_poll_group_000", 00:16:04.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:04.170 "listen_address": { 00:16:04.170 "trtype": "TCP", 00:16:04.170 "adrfam": "IPv4", 00:16:04.170 "traddr": "10.0.0.2", 00:16:04.170 "trsvcid": "4420" 00:16:04.170 }, 00:16:04.170 "peer_address": { 00:16:04.170 "trtype": "TCP", 00:16:04.170 "adrfam": "IPv4", 00:16:04.170 "traddr": "10.0.0.1", 00:16:04.170 "trsvcid": "34102" 00:16:04.170 }, 00:16:04.170 "auth": { 00:16:04.170 "state": "completed", 00:16:04.170 "digest": "sha256", 00:16:04.170 "dhgroup": "ffdhe6144" 00:16:04.170 } 00:16:04.170 } 00:16:04.170 ]' 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.170 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:04.427 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.992 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.251 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.509 00:16:05.509 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.509 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.509 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.767 { 00:16:05.767 "cntlid": 39, 00:16:05.767 "qid": 0, 00:16:05.767 "state": "enabled", 00:16:05.767 "thread": "nvmf_tgt_poll_group_000", 00:16:05.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:05.767 "listen_address": { 00:16:05.767 "trtype": "TCP", 00:16:05.767 "adrfam": 
"IPv4", 00:16:05.767 "traddr": "10.0.0.2", 00:16:05.767 "trsvcid": "4420" 00:16:05.767 }, 00:16:05.767 "peer_address": { 00:16:05.767 "trtype": "TCP", 00:16:05.767 "adrfam": "IPv4", 00:16:05.767 "traddr": "10.0.0.1", 00:16:05.767 "trsvcid": "34124" 00:16:05.767 }, 00:16:05.767 "auth": { 00:16:05.767 "state": "completed", 00:16:05.767 "digest": "sha256", 00:16:05.767 "dhgroup": "ffdhe6144" 00:16:05.767 } 00:16:05.767 } 00:16:05.767 ]' 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.767 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.026 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.026 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.026 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.026 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:06.026 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.594 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.854 
11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.854 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.422 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.422 11:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.422 { 00:16:07.422 "cntlid": 41, 00:16:07.422 "qid": 0, 00:16:07.422 "state": "enabled", 00:16:07.422 "thread": "nvmf_tgt_poll_group_000", 00:16:07.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:07.422 "listen_address": { 00:16:07.422 "trtype": "TCP", 00:16:07.422 "adrfam": "IPv4", 00:16:07.422 "traddr": "10.0.0.2", 00:16:07.422 "trsvcid": "4420" 00:16:07.422 }, 00:16:07.422 "peer_address": { 00:16:07.422 "trtype": "TCP", 00:16:07.422 "adrfam": "IPv4", 00:16:07.422 "traddr": "10.0.0.1", 00:16:07.422 "trsvcid": "34140" 00:16:07.422 }, 00:16:07.422 "auth": { 00:16:07.422 "state": "completed", 00:16:07.422 "digest": "sha256", 00:16:07.422 "dhgroup": "ffdhe8192" 00:16:07.422 } 00:16:07.422 } 00:16:07.422 ]' 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:07.422 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.680 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.680 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.680 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.680 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.680 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.938 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:07.938 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:08.198 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.457 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.025 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.025 11:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.025 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.284 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.284 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.284 { 00:16:09.284 "cntlid": 43, 00:16:09.284 "qid": 0, 00:16:09.284 "state": "enabled", 00:16:09.284 "thread": "nvmf_tgt_poll_group_000", 00:16:09.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:09.284 "listen_address": { 00:16:09.284 "trtype": "TCP", 00:16:09.284 "adrfam": "IPv4", 00:16:09.284 "traddr": "10.0.0.2", 00:16:09.284 "trsvcid": "4420" 00:16:09.284 }, 00:16:09.284 "peer_address": { 00:16:09.284 "trtype": "TCP", 00:16:09.284 "adrfam": "IPv4", 00:16:09.284 "traddr": "10.0.0.1", 00:16:09.284 "trsvcid": "34184" 00:16:09.284 }, 00:16:09.284 "auth": { 00:16:09.284 "state": "completed", 00:16:09.284 "digest": "sha256", 00:16:09.284 "dhgroup": "ffdhe8192" 00:16:09.284 } 00:16:09.284 } 00:16:09.284 ]' 00:16:09.284 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.284 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.544 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:09.544 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.112 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.112 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.679 00:16:10.680 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.680 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.680 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.938 { 00:16:10.938 "cntlid": 45, 00:16:10.938 "qid": 0, 00:16:10.938 "state": "enabled", 00:16:10.938 "thread": "nvmf_tgt_poll_group_000", 00:16:10.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:10.938 
"listen_address": { 00:16:10.938 "trtype": "TCP", 00:16:10.938 "adrfam": "IPv4", 00:16:10.938 "traddr": "10.0.0.2", 00:16:10.938 "trsvcid": "4420" 00:16:10.938 }, 00:16:10.938 "peer_address": { 00:16:10.938 "trtype": "TCP", 00:16:10.938 "adrfam": "IPv4", 00:16:10.938 "traddr": "10.0.0.1", 00:16:10.938 "trsvcid": "34218" 00:16:10.938 }, 00:16:10.938 "auth": { 00:16:10.938 "state": "completed", 00:16:10.938 "digest": "sha256", 00:16:10.938 "dhgroup": "ffdhe8192" 00:16:10.938 } 00:16:10.938 } 00:16:10.938 ]' 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.938 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.939 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.939 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.197 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:11.197 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.765 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.024 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.283 00:16:12.283 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.283 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:12.283 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.541 { 00:16:12.541 "cntlid": 47, 00:16:12.541 "qid": 0, 00:16:12.541 "state": "enabled", 00:16:12.541 "thread": "nvmf_tgt_poll_group_000", 00:16:12.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:12.541 "listen_address": { 00:16:12.541 "trtype": "TCP", 00:16:12.541 "adrfam": "IPv4", 00:16:12.541 "traddr": "10.0.0.2", 00:16:12.541 "trsvcid": "4420" 00:16:12.541 }, 00:16:12.541 "peer_address": { 00:16:12.541 "trtype": "TCP", 00:16:12.541 "adrfam": "IPv4", 00:16:12.541 "traddr": "10.0.0.1", 00:16:12.541 "trsvcid": "46868" 00:16:12.541 }, 00:16:12.541 "auth": { 00:16:12.541 "state": "completed", 00:16:12.541 "digest": "sha256", 00:16:12.541 "dhgroup": "ffdhe8192" 00:16:12.541 } 00:16:12.541 } 00:16:12.541 ]' 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.541 11:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.541 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.799 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.799 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.799 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.799 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:12.799 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.365 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.625 
11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.625 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.884 00:16:13.884 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.884 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.884 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.143 { 00:16:14.143 "cntlid": 49, 00:16:14.143 "qid": 0, 00:16:14.143 "state": "enabled", 00:16:14.143 "thread": "nvmf_tgt_poll_group_000", 00:16:14.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:14.143 "listen_address": { 00:16:14.143 "trtype": "TCP", 00:16:14.143 "adrfam": "IPv4", 00:16:14.143 "traddr": "10.0.0.2", 00:16:14.143 "trsvcid": "4420" 00:16:14.143 }, 00:16:14.143 "peer_address": { 00:16:14.143 "trtype": "TCP", 00:16:14.143 "adrfam": "IPv4", 00:16:14.143 "traddr": "10.0.0.1", 00:16:14.143 "trsvcid": "46902" 00:16:14.143 }, 00:16:14.143 "auth": { 00:16:14.143 "state": "completed", 00:16:14.143 "digest": "sha384", 00:16:14.143 "dhgroup": "null" 00:16:14.143 } 00:16:14.143 } 00:16:14.143 ]' 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:14.143 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.401 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:14.401 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.968 11:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.968 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.227 00:16:15.227 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.227 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.227 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.484 { 00:16:15.484 "cntlid": 51, 00:16:15.484 "qid": 0, 00:16:15.484 "state": "enabled", 00:16:15.484 "thread": "nvmf_tgt_poll_group_000", 00:16:15.484 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:15.484 "listen_address": { 00:16:15.484 "trtype": "TCP", 00:16:15.484 "adrfam": "IPv4", 00:16:15.484 "traddr": "10.0.0.2", 00:16:15.484 "trsvcid": "4420" 00:16:15.484 }, 00:16:15.484 "peer_address": { 00:16:15.484 "trtype": "TCP", 00:16:15.484 "adrfam": "IPv4", 00:16:15.484 "traddr": "10.0.0.1", 00:16:15.484 "trsvcid": "46940" 00:16:15.484 }, 00:16:15.484 "auth": { 00:16:15.484 "state": "completed", 00:16:15.484 "digest": "sha384", 00:16:15.484 "dhgroup": "null" 00:16:15.484 } 00:16:15.484 } 00:16:15.484 ]' 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.484 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.742 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.742 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.742 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.742 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:15.742 11:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.308 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.309 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.567 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.824 00:16:16.824 11:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.824 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.824 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.082 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.083 { 00:16:17.083 "cntlid": 53, 00:16:17.083 "qid": 0, 00:16:17.083 "state": "enabled", 00:16:17.083 "thread": "nvmf_tgt_poll_group_000", 00:16:17.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:17.083 "listen_address": { 00:16:17.083 "trtype": "TCP", 00:16:17.083 "adrfam": "IPv4", 00:16:17.083 "traddr": "10.0.0.2", 00:16:17.083 "trsvcid": "4420" 00:16:17.083 }, 00:16:17.083 "peer_address": { 00:16:17.083 "trtype": "TCP", 00:16:17.083 "adrfam": "IPv4", 00:16:17.083 "traddr": "10.0.0.1", 00:16:17.083 "trsvcid": "46972" 00:16:17.083 }, 00:16:17.083 "auth": { 00:16:17.083 "state": "completed", 00:16:17.083 "digest": "sha384", 00:16:17.083 "dhgroup": "null" 00:16:17.083 } 00:16:17.083 } 00:16:17.083 ]' 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.083 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.340 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:17.340 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.905 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:18.163 
11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.163 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.164 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.422 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.422 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.422 11:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.678 { 00:16:18.678 "cntlid": 55, 00:16:18.678 "qid": 0, 00:16:18.678 "state": "enabled", 00:16:18.678 "thread": "nvmf_tgt_poll_group_000", 00:16:18.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:18.678 "listen_address": { 00:16:18.678 "trtype": "TCP", 00:16:18.678 "adrfam": "IPv4", 00:16:18.678 "traddr": "10.0.0.2", 00:16:18.678 "trsvcid": "4420" 00:16:18.678 }, 00:16:18.678 "peer_address": { 00:16:18.678 "trtype": "TCP", 00:16:18.678 "adrfam": "IPv4", 00:16:18.678 "traddr": "10.0.0.1", 00:16:18.678 "trsvcid": "47006" 00:16:18.678 }, 00:16:18.678 "auth": { 00:16:18.678 "state": "completed", 00:16:18.678 "digest": "sha384", 00:16:18.678 "dhgroup": "null" 00:16:18.678 } 00:16:18.678 } 00:16:18.678 ]' 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.678 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.935 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:18.935 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.500 11:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.500 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.758 00:16:19.758 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.758 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.758 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.017 { 00:16:20.017 "cntlid": 57, 00:16:20.017 "qid": 0, 00:16:20.017 "state": "enabled", 00:16:20.017 "thread": "nvmf_tgt_poll_group_000", 00:16:20.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:20.017 "listen_address": { 00:16:20.017 "trtype": "TCP", 00:16:20.017 "adrfam": "IPv4", 00:16:20.017 "traddr": "10.0.0.2", 00:16:20.017 
"trsvcid": "4420" 00:16:20.017 }, 00:16:20.017 "peer_address": { 00:16:20.017 "trtype": "TCP", 00:16:20.017 "adrfam": "IPv4", 00:16:20.017 "traddr": "10.0.0.1", 00:16:20.017 "trsvcid": "47016" 00:16:20.017 }, 00:16:20.017 "auth": { 00:16:20.017 "state": "completed", 00:16:20.017 "digest": "sha384", 00:16:20.017 "dhgroup": "ffdhe2048" 00:16:20.017 } 00:16:20.017 } 00:16:20.017 ]' 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.017 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.274 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.274 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.274 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.275 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:20.275 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.840 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.841 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:21.098 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:21.098 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.099 11:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.099 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.357 00:16:21.357 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.357 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.357 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.616 { 00:16:21.616 "cntlid": 59, 00:16:21.616 "qid": 0, 00:16:21.616 "state": "enabled", 00:16:21.616 "thread": "nvmf_tgt_poll_group_000", 00:16:21.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:21.616 "listen_address": { 00:16:21.616 "trtype": "TCP", 00:16:21.616 "adrfam": "IPv4", 00:16:21.616 "traddr": "10.0.0.2", 00:16:21.616 "trsvcid": "4420" 00:16:21.616 }, 00:16:21.616 "peer_address": { 00:16:21.616 "trtype": "TCP", 00:16:21.616 "adrfam": "IPv4", 00:16:21.616 "traddr": "10.0.0.1", 00:16:21.616 "trsvcid": "59334" 00:16:21.616 }, 00:16:21.616 "auth": { 00:16:21.616 "state": "completed", 00:16:21.616 "digest": "sha384", 00:16:21.616 "dhgroup": "ffdhe2048" 00:16:21.616 } 00:16:21.616 } 00:16:21.616 ]' 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.616 11:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.616 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.877 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:21.877 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.443 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.700 00:16:22.700 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.700 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.700 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.958 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.958 { 00:16:22.958 "cntlid": 61, 00:16:22.958 "qid": 0, 00:16:22.958 "state": "enabled", 00:16:22.958 "thread": "nvmf_tgt_poll_group_000", 00:16:22.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:22.958 "listen_address": { 00:16:22.958 "trtype": "TCP", 00:16:22.958 "adrfam": "IPv4", 00:16:22.958 "traddr": "10.0.0.2", 00:16:22.958 "trsvcid": "4420" 00:16:22.958 }, 00:16:22.958 "peer_address": { 00:16:22.958 "trtype": "TCP", 00:16:22.958 "adrfam": "IPv4", 00:16:22.958 "traddr": "10.0.0.1", 00:16:22.958 "trsvcid": "59370" 00:16:22.958 }, 00:16:22.958 "auth": { 00:16:22.958 "state": "completed", 00:16:22.958 "digest": "sha384", 00:16:22.958 "dhgroup": "ffdhe2048" 00:16:22.958 } 00:16:22.958 } 00:16:22.958 ]' 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.958 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.959 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.959 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.217 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.217 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.217 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.217 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:23.217 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.891 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.187 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.187 00:16:24.187 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.187 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.187 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.446 { 00:16:24.446 "cntlid": 63, 00:16:24.446 "qid": 0, 00:16:24.446 "state": "enabled", 00:16:24.446 "thread": "nvmf_tgt_poll_group_000", 00:16:24.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:24.446 "listen_address": { 00:16:24.446 "trtype": "TCP", 00:16:24.446 "adrfam": 
"IPv4", 00:16:24.446 "traddr": "10.0.0.2", 00:16:24.446 "trsvcid": "4420" 00:16:24.446 }, 00:16:24.446 "peer_address": { 00:16:24.446 "trtype": "TCP", 00:16:24.446 "adrfam": "IPv4", 00:16:24.446 "traddr": "10.0.0.1", 00:16:24.446 "trsvcid": "59400" 00:16:24.446 }, 00:16:24.446 "auth": { 00:16:24.446 "state": "completed", 00:16:24.446 "digest": "sha384", 00:16:24.446 "dhgroup": "ffdhe2048" 00:16:24.446 } 00:16:24.446 } 00:16:24.446 ]' 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.446 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.706 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.706 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.706 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.706 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:24.706 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.273 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.532 
11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.532 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.791 00:16:25.791 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.791 11:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.791 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.051 { 00:16:26.051 "cntlid": 65, 00:16:26.051 "qid": 0, 00:16:26.051 "state": "enabled", 00:16:26.051 "thread": "nvmf_tgt_poll_group_000", 00:16:26.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:26.051 "listen_address": { 00:16:26.051 "trtype": "TCP", 00:16:26.051 "adrfam": "IPv4", 00:16:26.051 "traddr": "10.0.0.2", 00:16:26.051 "trsvcid": "4420" 00:16:26.051 }, 00:16:26.051 "peer_address": { 00:16:26.051 "trtype": "TCP", 00:16:26.051 "adrfam": "IPv4", 00:16:26.051 "traddr": "10.0.0.1", 00:16:26.051 "trsvcid": "59418" 00:16:26.051 }, 00:16:26.051 "auth": { 00:16:26.051 "state": "completed", 00:16:26.051 "digest": "sha384", 00:16:26.051 "dhgroup": "ffdhe3072" 00:16:26.051 } 00:16:26.051 } 00:16:26.051 ]' 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.051 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.310 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:26.310 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.876 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.877 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.134 00:16:27.134 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.134 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.134 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.392 11:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.392 { 00:16:27.392 "cntlid": 67, 00:16:27.392 "qid": 0, 00:16:27.392 "state": "enabled", 00:16:27.392 "thread": "nvmf_tgt_poll_group_000", 00:16:27.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:27.392 "listen_address": { 00:16:27.392 "trtype": "TCP", 00:16:27.392 "adrfam": "IPv4", 00:16:27.392 "traddr": "10.0.0.2", 00:16:27.392 "trsvcid": "4420" 00:16:27.392 }, 00:16:27.392 "peer_address": { 00:16:27.392 "trtype": "TCP", 00:16:27.392 "adrfam": "IPv4", 00:16:27.392 "traddr": "10.0.0.1", 00:16:27.392 "trsvcid": "59448" 00:16:27.392 }, 00:16:27.392 "auth": { 00:16:27.392 "state": "completed", 00:16:27.392 "digest": "sha384", 00:16:27.392 "dhgroup": "ffdhe3072" 00:16:27.392 } 00:16:27.392 } 00:16:27.392 ]' 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.392 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.650 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:27.651 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.218 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.476 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.734 00:16:28.734 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.734 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.734 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.991 { 00:16:28.991 "cntlid": 69, 00:16:28.991 "qid": 0, 00:16:28.991 "state": "enabled", 00:16:28.991 "thread": "nvmf_tgt_poll_group_000", 00:16:28.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:28.991 
"listen_address": { 00:16:28.991 "trtype": "TCP", 00:16:28.991 "adrfam": "IPv4", 00:16:28.991 "traddr": "10.0.0.2", 00:16:28.991 "trsvcid": "4420" 00:16:28.991 }, 00:16:28.991 "peer_address": { 00:16:28.991 "trtype": "TCP", 00:16:28.991 "adrfam": "IPv4", 00:16:28.991 "traddr": "10.0.0.1", 00:16:28.991 "trsvcid": "59480" 00:16:28.991 }, 00:16:28.991 "auth": { 00:16:28.991 "state": "completed", 00:16:28.991 "digest": "sha384", 00:16:28.991 "dhgroup": "ffdhe3072" 00:16:28.991 } 00:16:28.991 } 00:16:28.991 ]' 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.991 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.249 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:29.249 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.815 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.074 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.333 00:16:30.333 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.333 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.333 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.591 { 00:16:30.591 "cntlid": 71, 00:16:30.591 "qid": 0, 00:16:30.591 "state": "enabled", 00:16:30.591 "thread": "nvmf_tgt_poll_group_000", 00:16:30.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:30.591 "listen_address": { 00:16:30.591 "trtype": "TCP", 00:16:30.591 "adrfam": "IPv4", 00:16:30.591 "traddr": "10.0.0.2", 00:16:30.591 "trsvcid": "4420" 00:16:30.591 }, 00:16:30.591 "peer_address": { 00:16:30.591 "trtype": "TCP", 00:16:30.591 "adrfam": "IPv4", 00:16:30.591 "traddr": "10.0.0.1", 00:16:30.591 "trsvcid": "59496" 00:16:30.591 }, 00:16:30.591 "auth": { 00:16:30.591 "state": "completed", 00:16:30.591 "digest": "sha384", 00:16:30.591 "dhgroup": "ffdhe3072" 00:16:30.591 } 00:16:30.591 } 00:16:30.591 ]' 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.591 11:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.591 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.849 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:30.849 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.414 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.672 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.673 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.931 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.931 11:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.931 { 00:16:31.931 "cntlid": 73, 00:16:31.931 "qid": 0, 00:16:31.931 "state": "enabled", 00:16:31.931 "thread": "nvmf_tgt_poll_group_000", 00:16:31.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:31.931 "listen_address": { 00:16:31.931 "trtype": "TCP", 00:16:31.931 "adrfam": "IPv4", 00:16:31.931 "traddr": "10.0.0.2", 00:16:31.931 "trsvcid": "4420" 00:16:31.931 }, 00:16:31.931 "peer_address": { 00:16:31.931 "trtype": "TCP", 00:16:31.931 "adrfam": "IPv4", 00:16:31.931 "traddr": "10.0.0.1", 00:16:31.931 "trsvcid": "41522" 00:16:31.931 }, 00:16:31.931 "auth": { 00:16:31.931 "state": "completed", 00:16:31.931 "digest": "sha384", 00:16:31.931 "dhgroup": "ffdhe4096" 00:16:31.931 } 00:16:31.931 } 00:16:31.931 ]' 00:16:31.931 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.189 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.189 11:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.446 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:32.447 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.012 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.272 00:16:33.272 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.272 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.272 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.531 { 00:16:33.531 "cntlid": 75, 00:16:33.531 "qid": 0, 00:16:33.531 "state": "enabled", 00:16:33.531 "thread": "nvmf_tgt_poll_group_000", 00:16:33.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:33.531 
"listen_address": { 00:16:33.531 "trtype": "TCP", 00:16:33.531 "adrfam": "IPv4", 00:16:33.531 "traddr": "10.0.0.2", 00:16:33.531 "trsvcid": "4420" 00:16:33.531 }, 00:16:33.531 "peer_address": { 00:16:33.531 "trtype": "TCP", 00:16:33.531 "adrfam": "IPv4", 00:16:33.531 "traddr": "10.0.0.1", 00:16:33.531 "trsvcid": "41558" 00:16:33.531 }, 00:16:33.531 "auth": { 00:16:33.531 "state": "completed", 00:16:33.531 "digest": "sha384", 00:16:33.531 "dhgroup": "ffdhe4096" 00:16:33.531 } 00:16:33.531 } 00:16:33.531 ]' 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.531 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.788 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.788 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.788 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.788 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:33.788 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.355 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.613 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.872 00:16:34.872 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:34.872 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.872 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.131 { 00:16:35.131 "cntlid": 77, 00:16:35.131 "qid": 0, 00:16:35.131 "state": "enabled", 00:16:35.131 "thread": "nvmf_tgt_poll_group_000", 00:16:35.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:35.131 "listen_address": { 00:16:35.131 "trtype": "TCP", 00:16:35.131 "adrfam": "IPv4", 00:16:35.131 "traddr": "10.0.0.2", 00:16:35.131 "trsvcid": "4420" 00:16:35.131 }, 00:16:35.131 "peer_address": { 00:16:35.131 "trtype": "TCP", 00:16:35.131 "adrfam": "IPv4", 00:16:35.131 "traddr": "10.0.0.1", 00:16:35.131 "trsvcid": "41586" 00:16:35.131 }, 00:16:35.131 "auth": { 00:16:35.131 "state": "completed", 00:16:35.131 "digest": "sha384", 00:16:35.131 "dhgroup": "ffdhe4096" 00:16:35.131 } 00:16:35.131 } 00:16:35.131 ]' 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.131 11:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.131 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.131 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.131 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.131 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.390 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:35.390 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.956 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:36.215 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.215 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.473 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.473 11:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.473 { 00:16:36.473 "cntlid": 79, 00:16:36.473 "qid": 0, 00:16:36.473 "state": "enabled", 00:16:36.473 "thread": "nvmf_tgt_poll_group_000", 00:16:36.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:36.473 "listen_address": { 00:16:36.473 "trtype": "TCP", 00:16:36.473 "adrfam": "IPv4", 00:16:36.473 "traddr": "10.0.0.2", 00:16:36.473 "trsvcid": "4420" 00:16:36.473 }, 00:16:36.473 "peer_address": { 00:16:36.473 "trtype": "TCP", 00:16:36.473 "adrfam": "IPv4", 00:16:36.473 "traddr": "10.0.0.1", 00:16:36.473 "trsvcid": "41602" 00:16:36.473 }, 00:16:36.473 "auth": { 00:16:36.473 "state": "completed", 00:16:36.473 "digest": "sha384", 00:16:36.473 "dhgroup": "ffdhe4096" 00:16:36.473 } 00:16:36.473 } 00:16:36.473 ]' 00:16:36.473 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.731 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.731 11:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.989 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:36.989 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.557 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.558 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.558 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.123 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.123 { 00:16:38.123 "cntlid": 81, 00:16:38.123 "qid": 0, 00:16:38.123 "state": "enabled", 00:16:38.123 "thread": "nvmf_tgt_poll_group_000", 00:16:38.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:38.123 "listen_address": { 
00:16:38.123 "trtype": "TCP", 00:16:38.123 "adrfam": "IPv4", 00:16:38.123 "traddr": "10.0.0.2", 00:16:38.123 "trsvcid": "4420" 00:16:38.123 }, 00:16:38.123 "peer_address": { 00:16:38.123 "trtype": "TCP", 00:16:38.123 "adrfam": "IPv4", 00:16:38.123 "traddr": "10.0.0.1", 00:16:38.123 "trsvcid": "41630" 00:16:38.123 }, 00:16:38.123 "auth": { 00:16:38.123 "state": "completed", 00:16:38.123 "digest": "sha384", 00:16:38.123 "dhgroup": "ffdhe6144" 00:16:38.123 } 00:16:38.123 } 00:16:38.123 ]' 00:16:38.123 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.123 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.123 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:38.381 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.947 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.204 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.205 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.205 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.205 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.462 00:16:39.462 11:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.462 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.462 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.720 { 00:16:39.720 "cntlid": 83, 00:16:39.720 "qid": 0, 00:16:39.720 "state": "enabled", 00:16:39.720 "thread": "nvmf_tgt_poll_group_000", 00:16:39.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:39.720 "listen_address": { 00:16:39.720 "trtype": "TCP", 00:16:39.720 "adrfam": "IPv4", 00:16:39.720 "traddr": "10.0.0.2", 00:16:39.720 "trsvcid": "4420" 00:16:39.720 }, 00:16:39.720 "peer_address": { 00:16:39.720 "trtype": "TCP", 00:16:39.720 "adrfam": "IPv4", 00:16:39.720 "traddr": "10.0.0.1", 00:16:39.720 "trsvcid": "41664" 00:16:39.720 }, 00:16:39.720 "auth": { 00:16:39.720 "state": "completed", 00:16:39.720 "digest": "sha384", 00:16:39.720 "dhgroup": "ffdhe6144" 00:16:39.720 } 00:16:39.720 } 00:16:39.720 ]' 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.720 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.721 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.978 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.978 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.978 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.978 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:39.978 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.543 11:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.543 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.801 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.058 00:16:41.058 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.058 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.058 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.317 { 00:16:41.317 "cntlid": 85, 00:16:41.317 "qid": 0, 00:16:41.317 "state": "enabled", 00:16:41.317 "thread": "nvmf_tgt_poll_group_000", 00:16:41.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:41.317 "listen_address": { 00:16:41.317 "trtype": "TCP", 00:16:41.317 "adrfam": "IPv4", 00:16:41.317 "traddr": "10.0.0.2", 00:16:41.317 "trsvcid": "4420" 00:16:41.317 }, 00:16:41.317 "peer_address": { 00:16:41.317 "trtype": "TCP", 00:16:41.317 "adrfam": "IPv4", 00:16:41.317 "traddr": "10.0.0.1", 00:16:41.317 "trsvcid": "60302" 00:16:41.317 }, 00:16:41.317 "auth": { 00:16:41.317 "state": "completed", 00:16:41.317 "digest": "sha384", 00:16:41.317 "dhgroup": "ffdhe6144" 00:16:41.317 } 00:16:41.317 } 00:16:41.317 ]' 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.317 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.575 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:41.575 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.575 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.575 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:41.575 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:42.138 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.395 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.652 00:16:42.652 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.652 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.652 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.909 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.909 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.909 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.910 { 00:16:42.910 "cntlid": 87, 00:16:42.910 "qid": 0, 00:16:42.910 "state": "enabled", 00:16:42.910 "thread": "nvmf_tgt_poll_group_000", 00:16:42.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:42.910 "listen_address": { 00:16:42.910 "trtype": 
"TCP", 00:16:42.910 "adrfam": "IPv4", 00:16:42.910 "traddr": "10.0.0.2", 00:16:42.910 "trsvcid": "4420" 00:16:42.910 }, 00:16:42.910 "peer_address": { 00:16:42.910 "trtype": "TCP", 00:16:42.910 "adrfam": "IPv4", 00:16:42.910 "traddr": "10.0.0.1", 00:16:42.910 "trsvcid": "60328" 00:16:42.910 }, 00:16:42.910 "auth": { 00:16:42.910 "state": "completed", 00:16:42.910 "digest": "sha384", 00:16:42.910 "dhgroup": "ffdhe6144" 00:16:42.910 } 00:16:42.910 } 00:16:42.910 ]' 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.910 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.167 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:43.167 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.731 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.003 11:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.003 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.569 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.569 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.569 { 00:16:44.569 "cntlid": 89, 00:16:44.569 "qid": 0, 00:16:44.569 "state": "enabled", 00:16:44.569 "thread": "nvmf_tgt_poll_group_000", 00:16:44.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:44.569 "listen_address": { 00:16:44.569 "trtype": "TCP", 00:16:44.569 "adrfam": "IPv4", 00:16:44.569 "traddr": "10.0.0.2", 00:16:44.569 "trsvcid": "4420" 00:16:44.569 }, 00:16:44.569 "peer_address": { 00:16:44.569 "trtype": "TCP", 00:16:44.570 "adrfam": "IPv4", 00:16:44.570 "traddr": "10.0.0.1", 00:16:44.570 "trsvcid": "60366" 00:16:44.570 }, 00:16:44.570 "auth": { 00:16:44.570 "state": "completed", 00:16:44.570 "digest": "sha384", 00:16:44.570 "dhgroup": "ffdhe8192" 00:16:44.570 } 00:16:44.570 } 00:16:44.570 ]' 00:16:44.570 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.570 11:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.570 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.570 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.570 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.828 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.828 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.828 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.828 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:44.828 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.395 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.653 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.221 00:16:46.221 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.221 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.221 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.221 { 00:16:46.221 "cntlid": 91, 00:16:46.221 "qid": 0, 00:16:46.221 "state": "enabled", 00:16:46.221 "thread": "nvmf_tgt_poll_group_000", 00:16:46.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:46.221 "listen_address": { 00:16:46.221 "trtype": "TCP", 00:16:46.221 "adrfam": "IPv4", 00:16:46.221 "traddr": "10.0.0.2", 00:16:46.221 "trsvcid": "4420" 00:16:46.221 }, 00:16:46.221 "peer_address": { 00:16:46.221 "trtype": "TCP", 00:16:46.221 "adrfam": "IPv4", 00:16:46.221 "traddr": "10.0.0.1", 00:16:46.221 "trsvcid": "60394" 00:16:46.221 }, 00:16:46.221 "auth": { 00:16:46.221 "state": "completed", 00:16:46.221 "digest": "sha384", 00:16:46.221 "dhgroup": "ffdhe8192" 00:16:46.221 } 00:16:46.221 } 00:16:46.221 ]' 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.221 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.479 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.479 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.479 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:46.479 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.479 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.736 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:46.736 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.300 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.300 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.870 00:16:47.870 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.870 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.870 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.128 { 00:16:48.128 "cntlid": 93, 00:16:48.128 "qid": 0, 00:16:48.128 "state": "enabled", 00:16:48.128 "thread": "nvmf_tgt_poll_group_000", 00:16:48.128 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:48.128 "listen_address": { 00:16:48.128 "trtype": "TCP", 00:16:48.128 "adrfam": "IPv4", 00:16:48.128 "traddr": "10.0.0.2", 00:16:48.128 "trsvcid": "4420" 00:16:48.128 }, 00:16:48.128 "peer_address": { 00:16:48.128 "trtype": "TCP", 00:16:48.128 "adrfam": "IPv4", 00:16:48.128 "traddr": "10.0.0.1", 00:16:48.128 "trsvcid": "60416" 00:16:48.128 }, 00:16:48.128 "auth": { 00:16:48.128 "state": "completed", 00:16:48.128 "digest": "sha384", 00:16:48.128 "dhgroup": "ffdhe8192" 00:16:48.128 } 00:16:48.128 } 00:16:48.128 ]' 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.128 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.386 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:48.386 11:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.951 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.516 00:16:49.516 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:49.516 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.516 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.774 { 00:16:49.774 "cntlid": 95, 00:16:49.774 "qid": 0, 00:16:49.774 "state": "enabled", 00:16:49.774 "thread": "nvmf_tgt_poll_group_000", 00:16:49.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:49.774 "listen_address": { 00:16:49.774 "trtype": "TCP", 00:16:49.774 "adrfam": "IPv4", 00:16:49.774 "traddr": "10.0.0.2", 00:16:49.774 "trsvcid": "4420" 00:16:49.774 }, 00:16:49.774 "peer_address": { 00:16:49.774 "trtype": "TCP", 00:16:49.774 "adrfam": "IPv4", 00:16:49.774 "traddr": "10.0.0.1", 00:16:49.774 "trsvcid": "60452" 00:16:49.774 }, 00:16:49.774 "auth": { 00:16:49.774 "state": "completed", 00:16:49.774 "digest": "sha384", 00:16:49.774 "dhgroup": "ffdhe8192" 00:16:49.774 } 00:16:49.774 } 00:16:49.774 ]' 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.774 11:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.774 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.031 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:50.031 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.598 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.858 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.858 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.115 11:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.115 { 00:16:51.115 "cntlid": 97, 00:16:51.115 "qid": 0, 00:16:51.115 "state": "enabled", 00:16:51.115 "thread": "nvmf_tgt_poll_group_000", 00:16:51.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:51.115 "listen_address": { 00:16:51.115 "trtype": "TCP", 00:16:51.115 "adrfam": "IPv4", 00:16:51.115 "traddr": "10.0.0.2", 00:16:51.115 "trsvcid": "4420" 00:16:51.115 }, 00:16:51.115 "peer_address": { 00:16:51.115 "trtype": "TCP", 00:16:51.115 "adrfam": "IPv4", 00:16:51.115 "traddr": "10.0.0.1", 00:16:51.115 "trsvcid": "56950" 00:16:51.115 }, 00:16:51.115 "auth": { 00:16:51.115 "state": "completed", 00:16:51.115 "digest": "sha512", 00:16:51.115 "dhgroup": "null" 00:16:51.115 } 00:16:51.115 } 00:16:51.115 ]' 00:16:51.115 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.115 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.115 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.115 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.115 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.373 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.373 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.373 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.373 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:51.373 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:51.939 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.197 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.197 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.454 00:16:52.454 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.454 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.454 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.712 { 00:16:52.712 "cntlid": 99, 
00:16:52.712 "qid": 0, 00:16:52.712 "state": "enabled", 00:16:52.712 "thread": "nvmf_tgt_poll_group_000", 00:16:52.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:52.712 "listen_address": { 00:16:52.712 "trtype": "TCP", 00:16:52.712 "adrfam": "IPv4", 00:16:52.712 "traddr": "10.0.0.2", 00:16:52.712 "trsvcid": "4420" 00:16:52.712 }, 00:16:52.712 "peer_address": { 00:16:52.712 "trtype": "TCP", 00:16:52.712 "adrfam": "IPv4", 00:16:52.712 "traddr": "10.0.0.1", 00:16:52.712 "trsvcid": "56988" 00:16:52.712 }, 00:16:52.712 "auth": { 00:16:52.712 "state": "completed", 00:16:52.712 "digest": "sha512", 00:16:52.712 "dhgroup": "null" 00:16:52.712 } 00:16:52.712 } 00:16:52.712 ]' 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.712 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.970 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret 
DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:52.970 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.537 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.796 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.055 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.055 { 00:16:54.055 "cntlid": 101, 00:16:54.055 "qid": 0, 00:16:54.055 "state": "enabled", 00:16:54.055 "thread": "nvmf_tgt_poll_group_000", 00:16:54.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:54.055 "listen_address": { 00:16:54.055 "trtype": "TCP", 00:16:54.055 "adrfam": "IPv4", 00:16:54.055 "traddr": "10.0.0.2", 00:16:54.055 "trsvcid": "4420" 00:16:54.055 }, 00:16:54.055 "peer_address": { 00:16:54.055 "trtype": "TCP", 00:16:54.055 "adrfam": "IPv4", 00:16:54.055 "traddr": "10.0.0.1", 00:16:54.055 "trsvcid": "57018" 00:16:54.055 }, 00:16:54.055 "auth": { 00:16:54.055 "state": "completed", 00:16:54.055 "digest": "sha512", 00:16:54.055 "dhgroup": "null" 00:16:54.055 } 00:16:54.055 } 
00:16:54.055 ]' 00:16:54.055 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.314 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.314 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.314 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.314 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.314 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.314 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.314 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.573 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:54.573 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.140 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.140 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.140 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.140 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.140 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.140 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.399 00:16:55.399 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.399 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.399 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.658 { 00:16:55.658 "cntlid": 103, 00:16:55.658 "qid": 0, 00:16:55.658 "state": "enabled", 00:16:55.658 "thread": "nvmf_tgt_poll_group_000", 00:16:55.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:55.658 "listen_address": { 00:16:55.658 "trtype": "TCP", 00:16:55.658 "adrfam": "IPv4", 00:16:55.658 "traddr": "10.0.0.2", 00:16:55.658 "trsvcid": "4420" 00:16:55.658 }, 00:16:55.658 "peer_address": { 00:16:55.658 "trtype": "TCP", 00:16:55.658 "adrfam": "IPv4", 00:16:55.658 "traddr": "10.0.0.1", 00:16:55.658 "trsvcid": "57048" 00:16:55.658 }, 00:16:55.658 "auth": { 00:16:55.658 "state": "completed", 00:16:55.658 "digest": "sha512", 00:16:55.658 "dhgroup": "null" 00:16:55.658 } 00:16:55.658 } 00:16:55.658 ]' 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.658 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.658 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.917 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:55.917 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.485 11:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.485 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.744 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.003 00:16:57.003 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.003 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.003 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.263 { 00:16:57.263 "cntlid": 105, 00:16:57.263 "qid": 0, 00:16:57.263 "state": "enabled", 00:16:57.263 "thread": "nvmf_tgt_poll_group_000", 00:16:57.263 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:57.263 "listen_address": { 00:16:57.263 "trtype": "TCP", 00:16:57.263 "adrfam": "IPv4", 00:16:57.263 "traddr": "10.0.0.2", 00:16:57.263 "trsvcid": "4420" 00:16:57.263 }, 00:16:57.263 "peer_address": { 00:16:57.263 "trtype": "TCP", 00:16:57.263 "adrfam": "IPv4", 00:16:57.263 "traddr": "10.0.0.1", 00:16:57.263 "trsvcid": "57062" 00:16:57.263 }, 00:16:57.263 "auth": { 00:16:57.263 "state": "completed", 00:16:57.263 "digest": "sha512", 00:16:57.263 "dhgroup": "ffdhe2048" 00:16:57.263 } 00:16:57.263 } 00:16:57.263 ]' 00:16:57.263 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.263 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.522 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret 
DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:57.522 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.090 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.090 11:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.090 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.349 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.349 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.607 { 00:16:58.607 "cntlid": 107, 00:16:58.607 "qid": 0, 00:16:58.607 "state": "enabled", 00:16:58.607 "thread": "nvmf_tgt_poll_group_000", 00:16:58.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:58.607 "listen_address": { 00:16:58.607 "trtype": "TCP", 00:16:58.607 "adrfam": "IPv4", 00:16:58.607 "traddr": "10.0.0.2", 00:16:58.607 "trsvcid": "4420" 00:16:58.607 }, 00:16:58.607 "peer_address": { 00:16:58.607 "trtype": "TCP", 00:16:58.607 "adrfam": "IPv4", 00:16:58.607 "traddr": "10.0.0.1", 00:16:58.607 "trsvcid": "57110" 00:16:58.607 }, 00:16:58.607 "auth": { 00:16:58.607 "state": 
"completed", 00:16:58.607 "digest": "sha512", 00:16:58.607 "dhgroup": "ffdhe2048" 00:16:58.607 } 00:16:58.607 } 00:16:58.607 ]' 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.607 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.866 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.866 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.866 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.866 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:58.866 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:16:59.436 11:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.436 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.695 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.954 00:16:59.954 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.954 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.954 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.212 
11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.212 { 00:17:00.212 "cntlid": 109, 00:17:00.212 "qid": 0, 00:17:00.212 "state": "enabled", 00:17:00.212 "thread": "nvmf_tgt_poll_group_000", 00:17:00.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:00.212 "listen_address": { 00:17:00.212 "trtype": "TCP", 00:17:00.212 "adrfam": "IPv4", 00:17:00.212 "traddr": "10.0.0.2", 00:17:00.212 "trsvcid": "4420" 00:17:00.212 }, 00:17:00.212 "peer_address": { 00:17:00.212 "trtype": "TCP", 00:17:00.212 "adrfam": "IPv4", 00:17:00.212 "traddr": "10.0.0.1", 00:17:00.212 "trsvcid": "57144" 00:17:00.212 }, 00:17:00.212 "auth": { 00:17:00.212 "state": "completed", 00:17:00.212 "digest": "sha512", 00:17:00.212 "dhgroup": "ffdhe2048" 00:17:00.212 } 00:17:00.212 } 00:17:00.212 ]' 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.212 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.213 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.213 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.213 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.213 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.213 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.471 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:00.471 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.041 
11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.041 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.346 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.346 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.346 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.346 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.346 00:17:01.346 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.346 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.346 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.658 { 00:17:01.658 "cntlid": 111, 
00:17:01.658 "qid": 0, 00:17:01.658 "state": "enabled", 00:17:01.658 "thread": "nvmf_tgt_poll_group_000", 00:17:01.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:01.658 "listen_address": { 00:17:01.658 "trtype": "TCP", 00:17:01.658 "adrfam": "IPv4", 00:17:01.658 "traddr": "10.0.0.2", 00:17:01.658 "trsvcid": "4420" 00:17:01.658 }, 00:17:01.658 "peer_address": { 00:17:01.658 "trtype": "TCP", 00:17:01.658 "adrfam": "IPv4", 00:17:01.658 "traddr": "10.0.0.1", 00:17:01.658 "trsvcid": "58882" 00:17:01.658 }, 00:17:01.658 "auth": { 00:17:01.658 "state": "completed", 00:17:01.658 "digest": "sha512", 00:17:01.658 "dhgroup": "ffdhe2048" 00:17:01.658 } 00:17:01.658 } 00:17:01.658 ]' 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.658 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.933 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
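The three `jq` checks repeated throughout this log (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) validate the auth parameters negotiated on each qpair. A minimal Python sketch of the same validation, using a qpair record copied from the `nvmf_subsystem_get_qpairs` output above (timestamps stripped; field names exactly as reported by the RPC):

```python
import json

# One qpair record as dumped by `rpc.py nvmf_subsystem_get_qpairs` in this log,
# reduced to the fields the test's jq assertions consult.
qpairs = json.loads("""
[
  {
    "cntlid": 111,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
    "listen_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.2", "trsvcid": "4420"
    },
    "peer_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.1", "trsvcid": "58882"
    },
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe2048"
    }
  }
]
""")

auth = qpairs[0]["auth"]
# Mirror the test's three jq checks: negotiated digest, DH group, and that
# the DH-HMAC-CHAP exchange reached the "completed" state.
assert auth["digest"] == "sha512"
assert auth["dhgroup"] == "ffdhe2048"
assert auth["state"] == "completed"
print("auth checks passed")
```

This reproduces only what the log's own `jq -r` pipelines assert; the surrounding test additionally rotates keys (`key0`..`key3`), digests, and DH groups before each connect/verify/detach cycle.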
DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:01.933 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.498 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.499 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.499 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.499 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.499 11:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:02.499 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.757 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.757 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.015 { 00:17:03.015 "cntlid": 113, 00:17:03.015 "qid": 0, 00:17:03.015 "state": "enabled", 00:17:03.015 "thread": "nvmf_tgt_poll_group_000", 00:17:03.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:03.015 "listen_address": { 00:17:03.015 "trtype": "TCP", 00:17:03.015 "adrfam": "IPv4", 00:17:03.015 "traddr": "10.0.0.2", 00:17:03.015 "trsvcid": "4420" 00:17:03.015 }, 00:17:03.015 "peer_address": { 00:17:03.015 "trtype": "TCP", 00:17:03.015 "adrfam": "IPv4", 00:17:03.016 "traddr": "10.0.0.1", 00:17:03.016 "trsvcid": "58920" 00:17:03.016 }, 00:17:03.016 "auth": { 00:17:03.016 "state": 
"completed", 00:17:03.016 "digest": "sha512", 00:17:03.016 "dhgroup": "ffdhe3072" 00:17:03.016 } 00:17:03.016 } 00:17:03.016 ]' 00:17:03.016 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.016 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.016 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.281 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.281 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.281 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.281 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.281 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.281 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:03.281 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret 
DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.848 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.367 00:17:04.367 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.367 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.367 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.625 { 00:17:04.625 "cntlid": 115, 00:17:04.625 "qid": 0, 00:17:04.625 "state": "enabled", 00:17:04.625 "thread": "nvmf_tgt_poll_group_000", 00:17:04.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:04.625 "listen_address": { 00:17:04.625 "trtype": "TCP", 00:17:04.625 "adrfam": "IPv4", 00:17:04.625 "traddr": "10.0.0.2", 00:17:04.625 "trsvcid": "4420" 00:17:04.625 }, 00:17:04.625 "peer_address": { 00:17:04.625 "trtype": "TCP", 00:17:04.625 "adrfam": "IPv4", 00:17:04.625 "traddr": "10.0.0.1", 00:17:04.625 "trsvcid": "58946" 00:17:04.625 }, 00:17:04.625 "auth": { 00:17:04.625 "state": "completed", 00:17:04.625 "digest": "sha512", 00:17:04.625 "dhgroup": "ffdhe3072" 00:17:04.625 } 00:17:04.625 } 00:17:04.625 ]' 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.625 11:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.625 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.884 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:04.884 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.452 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.712 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.971 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.971 11:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.971 { 00:17:05.971 "cntlid": 117, 00:17:05.971 "qid": 0, 00:17:05.971 "state": "enabled", 00:17:05.971 "thread": "nvmf_tgt_poll_group_000", 00:17:05.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:05.971 "listen_address": { 00:17:05.971 "trtype": "TCP", 00:17:05.971 "adrfam": "IPv4", 00:17:05.971 "traddr": "10.0.0.2", 00:17:05.971 "trsvcid": "4420" 00:17:05.971 }, 00:17:05.971 "peer_address": { 00:17:05.971 "trtype": "TCP", 00:17:05.971 "adrfam": "IPv4", 00:17:05.971 "traddr": "10.0.0.1", 00:17:05.971 "trsvcid": "58974" 00:17:05.971 }, 00:17:05.971 "auth": { 00:17:05.971 "state": "completed", 00:17:05.971 "digest": "sha512", 00:17:05.971 "dhgroup": "ffdhe3072" 00:17:05.971 } 00:17:05.971 } 00:17:05.971 ]' 00:17:05.971 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.229 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.229 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.229 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.229 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.229 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.229 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.229 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.488 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:06.488 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.056 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.315 00:17:07.315 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.315 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.315 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.574 { 00:17:07.574 "cntlid": 119, 00:17:07.574 "qid": 0, 00:17:07.574 "state": "enabled", 00:17:07.574 "thread": "nvmf_tgt_poll_group_000", 00:17:07.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:07.574 "listen_address": { 00:17:07.574 "trtype": "TCP", 00:17:07.574 "adrfam": "IPv4", 00:17:07.574 "traddr": "10.0.0.2", 00:17:07.574 "trsvcid": "4420" 00:17:07.574 }, 00:17:07.574 "peer_address": { 00:17:07.574 "trtype": "TCP", 00:17:07.574 "adrfam": "IPv4", 00:17:07.574 "traddr": "10.0.0.1", 
00:17:07.574 "trsvcid": "58994" 00:17:07.574 }, 00:17:07.574 "auth": { 00:17:07.574 "state": "completed", 00:17:07.574 "digest": "sha512", 00:17:07.574 "dhgroup": "ffdhe3072" 00:17:07.574 } 00:17:07.574 } 00:17:07.574 ]' 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.574 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.833 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:07.833 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.399 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.658 11:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.658 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.917 00:17:08.917 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.917 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.917 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.176 { 00:17:09.176 "cntlid": 121, 00:17:09.176 "qid": 0, 00:17:09.176 "state": "enabled", 00:17:09.176 "thread": "nvmf_tgt_poll_group_000", 00:17:09.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:09.176 "listen_address": { 00:17:09.176 "trtype": "TCP", 00:17:09.176 "adrfam": "IPv4", 00:17:09.176 "traddr": "10.0.0.2", 00:17:09.176 "trsvcid": "4420" 00:17:09.176 }, 00:17:09.176 "peer_address": { 00:17:09.176 "trtype": "TCP", 00:17:09.176 "adrfam": "IPv4", 00:17:09.176 "traddr": "10.0.0.1", 00:17:09.176 "trsvcid": "59022" 00:17:09.176 }, 00:17:09.176 "auth": { 00:17:09.176 "state": "completed", 00:17:09.176 "digest": "sha512", 00:17:09.176 "dhgroup": "ffdhe4096" 00:17:09.176 } 00:17:09.176 } 00:17:09.176 ]' 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.176 11:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.176 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.435 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:09.435 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:10.004 11:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.004 11:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.004 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.263 00:17:10.263 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.263 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.263 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.523 { 00:17:10.523 "cntlid": 123, 00:17:10.523 "qid": 0, 00:17:10.523 "state": "enabled", 00:17:10.523 "thread": "nvmf_tgt_poll_group_000", 00:17:10.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:10.523 "listen_address": { 00:17:10.523 "trtype": "TCP", 00:17:10.523 "adrfam": "IPv4", 00:17:10.523 "traddr": "10.0.0.2", 00:17:10.523 "trsvcid": "4420" 00:17:10.523 }, 00:17:10.523 "peer_address": { 00:17:10.523 "trtype": "TCP", 00:17:10.523 "adrfam": "IPv4", 00:17:10.523 "traddr": "10.0.0.1", 00:17:10.523 "trsvcid": "59038" 00:17:10.523 }, 00:17:10.523 "auth": { 00:17:10.523 "state": "completed", 00:17:10.523 "digest": "sha512", 00:17:10.523 "dhgroup": "ffdhe4096" 00:17:10.523 } 00:17:10.523 } 00:17:10.523 ]' 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.523 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.783 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.783 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.783 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.783 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:10.783 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.351 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.351 11:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.610 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.869 00:17:11.869 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.869 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.869 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.128 { 00:17:12.128 "cntlid": 125, 00:17:12.128 "qid": 0, 00:17:12.128 "state": "enabled", 00:17:12.128 "thread": "nvmf_tgt_poll_group_000", 00:17:12.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:12.128 "listen_address": { 00:17:12.128 "trtype": "TCP", 00:17:12.128 "adrfam": "IPv4", 00:17:12.128 "traddr": "10.0.0.2", 00:17:12.128 
"trsvcid": "4420" 00:17:12.128 }, 00:17:12.128 "peer_address": { 00:17:12.128 "trtype": "TCP", 00:17:12.128 "adrfam": "IPv4", 00:17:12.128 "traddr": "10.0.0.1", 00:17:12.128 "trsvcid": "44748" 00:17:12.128 }, 00:17:12.128 "auth": { 00:17:12.128 "state": "completed", 00:17:12.128 "digest": "sha512", 00:17:12.128 "dhgroup": "ffdhe4096" 00:17:12.128 } 00:17:12.128 } 00:17:12.128 ]' 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.128 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.129 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.129 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.129 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.389 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:12.389 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.954 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.212 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.472 { 00:17:13.472 "cntlid": 127, 00:17:13.472 "qid": 0, 00:17:13.472 "state": "enabled", 00:17:13.472 "thread": "nvmf_tgt_poll_group_000", 00:17:13.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:13.472 "listen_address": { 00:17:13.472 "trtype": "TCP", 00:17:13.472 "adrfam": "IPv4", 00:17:13.472 "traddr": "10.0.0.2", 00:17:13.472 "trsvcid": "4420" 00:17:13.472 }, 00:17:13.472 "peer_address": { 00:17:13.472 "trtype": "TCP", 00:17:13.472 "adrfam": "IPv4", 00:17:13.472 "traddr": "10.0.0.1", 00:17:13.472 "trsvcid": "44768" 00:17:13.472 }, 00:17:13.472 "auth": { 00:17:13.472 "state": "completed", 00:17:13.472 "digest": "sha512", 00:17:13.472 "dhgroup": "ffdhe4096" 00:17:13.472 } 00:17:13.472 } 00:17:13.472 ]' 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.472 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.730 11:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:13.731 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.296 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.552 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.553 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.810 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.068 11:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.068 { 00:17:15.068 "cntlid": 129, 00:17:15.068 "qid": 0, 00:17:15.068 "state": "enabled", 00:17:15.068 "thread": "nvmf_tgt_poll_group_000", 00:17:15.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:15.068 "listen_address": { 00:17:15.068 "trtype": "TCP", 00:17:15.068 "adrfam": "IPv4", 00:17:15.068 "traddr": "10.0.0.2", 00:17:15.068 "trsvcid": "4420" 00:17:15.068 }, 00:17:15.068 "peer_address": { 00:17:15.068 "trtype": "TCP", 00:17:15.068 "adrfam": "IPv4", 00:17:15.068 "traddr": "10.0.0.1", 00:17:15.068 "trsvcid": "44782" 00:17:15.068 }, 00:17:15.068 "auth": { 00:17:15.068 "state": "completed", 00:17:15.068 "digest": "sha512", 00:17:15.068 "dhgroup": "ffdhe6144" 00:17:15.068 } 00:17:15.068 } 00:17:15.068 ]' 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.068 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:15.325 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.890 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.890 11:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.148 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.405 00:17:16.405 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.405 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.405 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.662 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.662 { 00:17:16.662 "cntlid": 131, 00:17:16.662 "qid": 0, 00:17:16.662 "state": "enabled", 00:17:16.662 "thread": "nvmf_tgt_poll_group_000", 00:17:16.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:16.662 "listen_address": { 00:17:16.662 "trtype": "TCP", 00:17:16.662 "adrfam": "IPv4", 00:17:16.662 "traddr": "10.0.0.2", 00:17:16.662 
"trsvcid": "4420" 00:17:16.662 }, 00:17:16.662 "peer_address": { 00:17:16.662 "trtype": "TCP", 00:17:16.662 "adrfam": "IPv4", 00:17:16.662 "traddr": "10.0.0.1", 00:17:16.662 "trsvcid": "44808" 00:17:16.662 }, 00:17:16.662 "auth": { 00:17:16.662 "state": "completed", 00:17:16.662 "digest": "sha512", 00:17:16.662 "dhgroup": "ffdhe6144" 00:17:16.662 } 00:17:16.662 } 00:17:16.662 ]' 00:17:16.663 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.663 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.663 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.663 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.663 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.921 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.921 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.921 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.921 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:16.921 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.487 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.744 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.001 00:17:18.001 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.001 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:18.001 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.258 { 00:17:18.258 "cntlid": 133, 00:17:18.258 "qid": 0, 00:17:18.258 "state": "enabled", 00:17:18.258 "thread": "nvmf_tgt_poll_group_000", 00:17:18.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:18.258 "listen_address": { 00:17:18.258 "trtype": "TCP", 00:17:18.258 "adrfam": "IPv4", 00:17:18.258 "traddr": "10.0.0.2", 00:17:18.258 "trsvcid": "4420" 00:17:18.258 }, 00:17:18.258 "peer_address": { 00:17:18.258 "trtype": "TCP", 00:17:18.258 "adrfam": "IPv4", 00:17:18.258 "traddr": "10.0.0.1", 00:17:18.258 "trsvcid": "44824" 00:17:18.258 }, 00:17:18.258 "auth": { 00:17:18.258 "state": "completed", 00:17:18.258 "digest": "sha512", 00:17:18.258 "dhgroup": "ffdhe6144" 00:17:18.258 } 00:17:18.258 } 00:17:18.258 ]' 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.258 11:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.258 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.259 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.517 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.517 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.517 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.517 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:18.517 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.085 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.344 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.602 00:17:19.602 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.602 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.602 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.861 { 00:17:19.861 "cntlid": 135, 00:17:19.861 "qid": 0, 00:17:19.861 "state": "enabled", 00:17:19.861 "thread": "nvmf_tgt_poll_group_000", 00:17:19.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:19.861 "listen_address": { 00:17:19.861 "trtype": "TCP", 00:17:19.861 "adrfam": "IPv4", 00:17:19.861 "traddr": "10.0.0.2", 00:17:19.861 "trsvcid": "4420" 00:17:19.861 }, 00:17:19.861 "peer_address": { 00:17:19.861 "trtype": "TCP", 00:17:19.861 "adrfam": "IPv4", 00:17:19.861 "traddr": "10.0.0.1", 00:17:19.861 "trsvcid": "44854" 00:17:19.861 }, 00:17:19.861 "auth": { 00:17:19.861 "state": "completed", 00:17:19.861 "digest": "sha512", 00:17:19.861 "dhgroup": "ffdhe6144" 00:17:19.861 } 00:17:19.861 } 00:17:19.861 ]' 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.861 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.862 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.120 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:20.121 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.688 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.688 11:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.948 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.207 00:17:21.207 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.207 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.207 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.465 { 00:17:21.465 "cntlid": 137, 00:17:21.465 "qid": 0, 00:17:21.465 "state": "enabled", 00:17:21.465 "thread": "nvmf_tgt_poll_group_000", 00:17:21.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:21.465 "listen_address": { 00:17:21.465 "trtype": "TCP", 00:17:21.465 "adrfam": "IPv4", 00:17:21.465 "traddr": "10.0.0.2", 00:17:21.465 
"trsvcid": "4420" 00:17:21.465 }, 00:17:21.465 "peer_address": { 00:17:21.465 "trtype": "TCP", 00:17:21.465 "adrfam": "IPv4", 00:17:21.465 "traddr": "10.0.0.1", 00:17:21.465 "trsvcid": "51888" 00:17:21.465 }, 00:17:21.465 "auth": { 00:17:21.465 "state": "completed", 00:17:21.465 "digest": "sha512", 00:17:21.465 "dhgroup": "ffdhe8192" 00:17:21.465 } 00:17:21.465 } 00:17:21.465 ]' 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.465 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.723 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.723 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.723 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.723 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.723 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.981 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:21.981 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:22.240 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.499 11:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.499 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.066 00:17:23.066 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.066 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.066 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.325 { 00:17:23.325 "cntlid": 139, 00:17:23.325 "qid": 0, 00:17:23.325 "state": "enabled", 00:17:23.325 "thread": "nvmf_tgt_poll_group_000", 00:17:23.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:23.325 "listen_address": { 00:17:23.325 "trtype": "TCP", 00:17:23.325 "adrfam": "IPv4", 00:17:23.325 "traddr": "10.0.0.2", 00:17:23.325 "trsvcid": "4420" 00:17:23.325 }, 00:17:23.325 "peer_address": { 00:17:23.325 "trtype": "TCP", 00:17:23.325 "adrfam": "IPv4", 00:17:23.325 "traddr": "10.0.0.1", 00:17:23.325 "trsvcid": "51912" 00:17:23.325 }, 00:17:23.325 "auth": { 00:17:23.325 "state": "completed", 00:17:23.325 "digest": "sha512", 00:17:23.325 "dhgroup": "ffdhe8192" 00:17:23.325 } 00:17:23.325 } 00:17:23.325 ]' 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.325 11:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.325 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.584 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:23.584 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: --dhchap-ctrl-secret DHHC-1:02:ZWIzNjA3YTc0Y2Y2ZTUzMmE1MTIyZmUxM2VjNjg5ZTJkNTJiM2YxZDEwODMwMzdhtpV1hg==: 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.153 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.412 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.413 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.413 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.672 00:17:24.672 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.672 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.672 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.930 11:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.930 { 00:17:24.930 "cntlid": 141, 00:17:24.930 "qid": 0, 00:17:24.930 "state": "enabled", 00:17:24.930 "thread": "nvmf_tgt_poll_group_000", 00:17:24.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:24.930 "listen_address": { 00:17:24.930 "trtype": "TCP", 00:17:24.930 "adrfam": "IPv4", 00:17:24.930 "traddr": "10.0.0.2", 00:17:24.930 "trsvcid": "4420" 00:17:24.930 }, 00:17:24.930 "peer_address": { 00:17:24.930 "trtype": "TCP", 00:17:24.930 "adrfam": "IPv4", 00:17:24.930 "traddr": "10.0.0.1", 00:17:24.930 "trsvcid": "51928" 00:17:24.930 }, 00:17:24.930 "auth": { 00:17:24.930 "state": "completed", 00:17:24.930 "digest": "sha512", 00:17:24.930 "dhgroup": "ffdhe8192" 00:17:24.930 } 00:17:24.930 } 00:17:24.930 ]' 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.930 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.189 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:25.189 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjM2RmZjEzOWFiYjFmNWNkZWEwMjQxZWVhMTE0YmPVzsd0: 00:17:25.756 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.756 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:25.756 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.757 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.757 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.757 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.757 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.757 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.016 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.274 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.533 { 00:17:26.533 "cntlid": 143, 00:17:26.533 "qid": 0, 00:17:26.533 "state": "enabled", 00:17:26.533 "thread": "nvmf_tgt_poll_group_000", 00:17:26.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:26.533 "listen_address": { 00:17:26.533 "trtype": "TCP", 00:17:26.533 "adrfam": 
"IPv4", 00:17:26.533 "traddr": "10.0.0.2", 00:17:26.533 "trsvcid": "4420" 00:17:26.533 }, 00:17:26.533 "peer_address": { 00:17:26.533 "trtype": "TCP", 00:17:26.533 "adrfam": "IPv4", 00:17:26.533 "traddr": "10.0.0.1", 00:17:26.533 "trsvcid": "51962" 00:17:26.533 }, 00:17:26.533 "auth": { 00:17:26.533 "state": "completed", 00:17:26.533 "digest": "sha512", 00:17:26.533 "dhgroup": "ffdhe8192" 00:17:26.533 } 00:17:26.533 } 00:17:26.533 ]' 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.533 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:26.792 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.359 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.618 11:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.618 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.619 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.619 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.619 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.619 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.185 00:17:28.185 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.185 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.185 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.185 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.443 { 00:17:28.443 "cntlid": 145, 00:17:28.443 "qid": 0, 00:17:28.443 "state": "enabled", 00:17:28.443 "thread": "nvmf_tgt_poll_group_000", 00:17:28.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:28.443 "listen_address": { 00:17:28.443 "trtype": "TCP", 00:17:28.443 "adrfam": "IPv4", 00:17:28.443 "traddr": "10.0.0.2", 00:17:28.443 "trsvcid": "4420" 00:17:28.443 }, 00:17:28.443 "peer_address": { 00:17:28.443 "trtype": "TCP", 00:17:28.443 "adrfam": "IPv4", 00:17:28.443 "traddr": "10.0.0.1", 00:17:28.443 "trsvcid": "51986" 00:17:28.443 }, 00:17:28.443 "auth": { 00:17:28.443 "state": 
"completed", 00:17:28.443 "digest": "sha512", 00:17:28.443 "dhgroup": "ffdhe8192" 00:17:28.443 } 00:17:28.443 } 00:17:28.443 ]' 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.443 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.702 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:28.702 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjBmNjhiZWQ1ZGUxNTc3ZjBiMmJhZTg5MzM0YzNlMmQ3ZTYyZmIwMzRkYTU2ZGRmjYzBzg==: --dhchap-ctrl-secret 
DHHC-1:03:NGMyNDI4NzE4NjdhM2Y1ZTUwYWFjZTI2ZDViM2E0ZDZmM2NkYzA1ODUzNzlkNjY5YjI1M2VjNmQ2YWIyNTMzMFDYq1s=: 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:29.269 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:29.270 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:29.528 request: 00:17:29.529 { 00:17:29.529 "name": "nvme0", 00:17:29.529 "trtype": "tcp", 00:17:29.529 "traddr": "10.0.0.2", 00:17:29.529 "adrfam": "ipv4", 00:17:29.529 "trsvcid": "4420", 00:17:29.529 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:29.529 "prchk_reftag": false, 00:17:29.529 "prchk_guard": false, 00:17:29.529 "hdgst": false, 00:17:29.529 "ddgst": false, 00:17:29.529 "dhchap_key": "key2", 00:17:29.529 "allow_unrecognized_csi": false, 00:17:29.529 "method": "bdev_nvme_attach_controller", 00:17:29.529 "req_id": 1 00:17:29.529 } 00:17:29.529 Got JSON-RPC error response 00:17:29.529 response: 00:17:29.529 { 00:17:29.529 "code": -5, 00:17:29.529 "message": 
"Input/output error" 00:17:29.529 } 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.529 11:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.529 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.096 request: 00:17:30.096 { 00:17:30.096 "name": "nvme0", 00:17:30.096 "trtype": "tcp", 00:17:30.096 "traddr": "10.0.0.2", 00:17:30.096 "adrfam": "ipv4", 00:17:30.096 "trsvcid": "4420", 00:17:30.096 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:30.096 "prchk_reftag": false, 00:17:30.096 "prchk_guard": false, 00:17:30.096 "hdgst": 
false, 00:17:30.096 "ddgst": false, 00:17:30.096 "dhchap_key": "key1", 00:17:30.096 "dhchap_ctrlr_key": "ckey2", 00:17:30.096 "allow_unrecognized_csi": false, 00:17:30.096 "method": "bdev_nvme_attach_controller", 00:17:30.096 "req_id": 1 00:17:30.096 } 00:17:30.096 Got JSON-RPC error response 00:17:30.096 response: 00:17:30.096 { 00:17:30.096 "code": -5, 00:17:30.096 "message": "Input/output error" 00:17:30.096 } 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.096 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.097 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.355 request: 00:17:30.355 { 00:17:30.355 "name": "nvme0", 00:17:30.355 "trtype": 
"tcp", 00:17:30.355 "traddr": "10.0.0.2", 00:17:30.355 "adrfam": "ipv4", 00:17:30.355 "trsvcid": "4420", 00:17:30.355 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:30.355 "prchk_reftag": false, 00:17:30.355 "prchk_guard": false, 00:17:30.355 "hdgst": false, 00:17:30.355 "ddgst": false, 00:17:30.355 "dhchap_key": "key1", 00:17:30.355 "dhchap_ctrlr_key": "ckey1", 00:17:30.355 "allow_unrecognized_csi": false, 00:17:30.355 "method": "bdev_nvme_attach_controller", 00:17:30.355 "req_id": 1 00:17:30.355 } 00:17:30.355 Got JSON-RPC error response 00:17:30.355 response: 00:17:30.355 { 00:17:30.355 "code": -5, 00:17:30.355 "message": "Input/output error" 00:17:30.355 } 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.355 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1698720 ']' 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698720' 00:17:30.614 killing process with pid 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1698720 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1721033 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1721033 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1721033 ']' 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.614 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1721033 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1721033 ']' 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.875 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.134 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:31.134 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:31.134 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.134 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 null0 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yFx 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7xi ]] 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7xi 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.134 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pUK 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Rh8 ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rh8 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XoO 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RNi ]] 00:17:31.392 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RNi 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oFq 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.393 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.959 nvme0n1 00:17:31.959 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.959 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.959 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.217 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.217 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.217 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.217 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.217 { 00:17:32.217 "cntlid": 1, 00:17:32.217 "qid": 0, 00:17:32.217 "state": "enabled", 00:17:32.217 "thread": "nvmf_tgt_poll_group_000", 00:17:32.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:32.217 "listen_address": { 00:17:32.217 "trtype": "TCP", 00:17:32.217 "adrfam": "IPv4", 00:17:32.217 "traddr": "10.0.0.2", 00:17:32.217 "trsvcid": "4420" 00:17:32.217 }, 00:17:32.217 "peer_address": { 00:17:32.217 "trtype": "TCP", 00:17:32.217 "adrfam": "IPv4", 00:17:32.217 "traddr": 
"10.0.0.1", 00:17:32.217 "trsvcid": "59588" 00:17:32.217 }, 00:17:32.217 "auth": { 00:17:32.217 "state": "completed", 00:17:32.217 "digest": "sha512", 00:17:32.217 "dhgroup": "ffdhe8192" 00:17:32.217 } 00:17:32.217 } 00:17:32.217 ]' 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.217 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.475 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:32.475 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:33.041 11:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.041 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.041 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:33.042 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:33.300 11:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.300 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.559 request: 00:17:33.559 { 00:17:33.559 "name": "nvme0", 00:17:33.559 "trtype": "tcp", 00:17:33.559 "traddr": "10.0.0.2", 00:17:33.559 "adrfam": "ipv4", 00:17:33.559 "trsvcid": "4420", 00:17:33.559 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:33.559 "prchk_reftag": false, 00:17:33.559 "prchk_guard": false, 00:17:33.559 "hdgst": false, 00:17:33.559 "ddgst": false, 00:17:33.559 "dhchap_key": "key3", 00:17:33.559 
"allow_unrecognized_csi": false, 00:17:33.559 "method": "bdev_nvme_attach_controller", 00:17:33.559 "req_id": 1 00:17:33.559 } 00:17:33.559 Got JSON-RPC error response 00:17:33.559 response: 00:17:33.559 { 00:17:33.559 "code": -5, 00:17:33.559 "message": "Input/output error" 00:17:33.559 } 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:33.559 11:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.559 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.818 request: 00:17:33.818 { 00:17:33.818 "name": "nvme0", 00:17:33.818 "trtype": "tcp", 00:17:33.818 "traddr": "10.0.0.2", 00:17:33.818 "adrfam": "ipv4", 00:17:33.818 "trsvcid": "4420", 00:17:33.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:33.818 "prchk_reftag": false, 00:17:33.818 "prchk_guard": false, 00:17:33.818 "hdgst": false, 00:17:33.818 "ddgst": false, 00:17:33.818 "dhchap_key": "key3", 00:17:33.818 "allow_unrecognized_csi": false, 00:17:33.818 "method": "bdev_nvme_attach_controller", 00:17:33.818 "req_id": 1 00:17:33.818 } 00:17:33.818 Got JSON-RPC error response 00:17:33.818 response: 00:17:33.818 { 00:17:33.818 "code": -5, 00:17:33.818 "message": "Input/output error" 00:17:33.818 } 00:17:33.818 
11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:33.818 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.077 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.335 request: 00:17:34.335 { 00:17:34.335 "name": "nvme0", 00:17:34.335 "trtype": "tcp", 00:17:34.335 "traddr": "10.0.0.2", 00:17:34.335 "adrfam": "ipv4", 00:17:34.335 "trsvcid": "4420", 00:17:34.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:34.335 "prchk_reftag": false, 00:17:34.335 "prchk_guard": false, 00:17:34.335 "hdgst": false, 00:17:34.335 "ddgst": false, 00:17:34.335 "dhchap_key": "key0", 00:17:34.335 "dhchap_ctrlr_key": "key1", 00:17:34.335 "allow_unrecognized_csi": false, 00:17:34.335 "method": "bdev_nvme_attach_controller", 00:17:34.335 "req_id": 1 00:17:34.335 } 00:17:34.335 Got JSON-RPC error response 00:17:34.335 response: 00:17:34.335 { 00:17:34.335 "code": -5, 00:17:34.335 "message": "Input/output error" 00:17:34.335 } 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:34.335 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:34.592 nvme0n1 00:17:34.592 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:34.593 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:34.593 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.850 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.850 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.850 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:35.108 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:35.109 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:35.677 nvme0n1 00:17:35.677 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:35.677 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:35.677 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.935 
11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:35.935 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.194 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.194 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:36.194 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: --dhchap-ctrl-secret DHHC-1:03:N2IwMmI0NTIwZDUyMWMxMGZiMWMyMDgyMDJhZjVkMWE4YWNiYzc4NTA4MjM2ZjI2NTgzNzc1NGNhZjE3YjQyZKhXQDs=: 00:17:36.761 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:36.761 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:36.762 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:37.329 request: 00:17:37.329 { 00:17:37.329 "name": "nvme0", 00:17:37.329 "trtype": "tcp", 00:17:37.329 "traddr": "10.0.0.2", 00:17:37.329 "adrfam": "ipv4", 00:17:37.329 "trsvcid": "4420", 00:17:37.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:37.329 "prchk_reftag": false, 00:17:37.330 "prchk_guard": false, 00:17:37.330 "hdgst": false, 00:17:37.330 "ddgst": false, 00:17:37.330 "dhchap_key": "key1", 00:17:37.330 "allow_unrecognized_csi": false, 00:17:37.330 "method": "bdev_nvme_attach_controller", 00:17:37.330 "req_id": 1 00:17:37.330 } 00:17:37.330 Got JSON-RPC error response 00:17:37.330 response: 00:17:37.330 { 00:17:37.330 "code": -5, 00:17:37.330 "message": "Input/output error" 00:17:37.330 } 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.330 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.897 nvme0n1 00:17:37.897 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:37.897 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:37.897 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.156 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.156 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.156 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:38.415 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:38.415 nvme0n1 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.688 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: '' 2s 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: ]] 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTM5YzNmOWJmMmRiNDJhM2ViNjJkYWEzYjljZWZmNTBr6Nxs: 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:39.015 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:40.920 
11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: 2s 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:40.920 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:40.921 11:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: ]] 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWVhYzA2ZGViY2EwNzEwNjM5MDliMGQ3ZDhmZDY0N2NkOWJiZTY1YWU4ODY2ODE3I1daoQ==: 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:40.921 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.448 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.705 nvme0n1 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.705 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:44.271 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:44.271 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:44.271 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:44.529 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.530 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:44.788 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:45.355 request: 00:17:45.355 { 00:17:45.355 "name": "nvme0", 00:17:45.355 "dhchap_key": "key1", 00:17:45.355 "dhchap_ctrlr_key": "key3", 00:17:45.355 "method": "bdev_nvme_set_keys", 00:17:45.355 "req_id": 1 00:17:45.355 } 00:17:45.355 Got JSON-RPC error response 00:17:45.355 response: 00:17:45.355 { 00:17:45.355 "code": -13, 00:17:45.356 "message": "Permission denied" 00:17:45.356 } 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:45.356 11:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:45.356 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:46.292 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:46.292 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:46.292 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:46.551 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.487 nvme0n1 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.487 11:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:47.487 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:47.745 request: 00:17:47.745 { 00:17:47.745 "name": "nvme0", 00:17:47.745 "dhchap_key": "key2", 00:17:47.745 "dhchap_ctrlr_key": "key0", 00:17:47.745 "method": "bdev_nvme_set_keys", 00:17:47.745 "req_id": 1 00:17:47.745 } 00:17:47.745 Got JSON-RPC error response 00:17:47.745 response: 00:17:47.745 { 00:17:47.745 "code": -13, 00:17:47.745 "message": "Permission denied" 00:17:47.745 } 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:47.746 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.002 11:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:48.002 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:48.935 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:48.935 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:48.935 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.193 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:49.193 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:49.193 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1698742 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1698742 ']' 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1698742 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698742 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698742' 00:17:49.194 killing process with pid 1698742 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1698742 00:17:49.194 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1698742 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.453 rmmod nvme_tcp 00:17:49.453 rmmod nvme_fabrics 00:17:49.453 rmmod nvme_keyring 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1721033 ']' 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1721033 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1721033 ']' 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1721033 
00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.453 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721033 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721033' 00:17:49.713 killing process with pid 1721033 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1721033 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1721033 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:49.713 11:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.713 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yFx /tmp/spdk.key-sha256.pUK /tmp/spdk.key-sha384.XoO /tmp/spdk.key-sha512.oFq /tmp/spdk.key-sha512.7xi /tmp/spdk.key-sha384.Rh8 /tmp/spdk.key-sha256.RNi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:52.256 00:17:52.256 real 2m23.916s 00:17:52.256 user 5m30.096s 00:17:52.256 sys 0m23.736s 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 ************************************ 00:17:52.256 END TEST nvmf_auth_target 00:17:52.256 ************************************ 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 ************************************ 00:17:52.256 START TEST nvmf_bdevio_no_huge 00:17:52.256 ************************************ 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:52.256 * Looking for test storage... 00:17:52.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.256 --rc genhtml_branch_coverage=1 00:17:52.256 --rc genhtml_function_coverage=1 00:17:52.256 --rc genhtml_legend=1 00:17:52.256 --rc geninfo_all_blocks=1 00:17:52.256 --rc geninfo_unexecuted_blocks=1 00:17:52.256 00:17:52.256 ' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.256 --rc genhtml_branch_coverage=1 00:17:52.256 --rc genhtml_function_coverage=1 00:17:52.256 --rc genhtml_legend=1 00:17:52.256 --rc geninfo_all_blocks=1 00:17:52.256 --rc geninfo_unexecuted_blocks=1 00:17:52.256 00:17:52.256 ' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.256 --rc genhtml_branch_coverage=1 00:17:52.256 --rc genhtml_function_coverage=1 00:17:52.256 --rc genhtml_legend=1 00:17:52.256 --rc geninfo_all_blocks=1 00:17:52.256 --rc geninfo_unexecuted_blocks=1 00:17:52.256 00:17:52.256 ' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.256 --rc genhtml_branch_coverage=1 
00:17:52.256 --rc genhtml_function_coverage=1 00:17:52.256 --rc genhtml_legend=1 00:17:52.256 --rc geninfo_all_blocks=1 00:17:52.256 --rc geninfo_unexecuted_blocks=1 00:17:52.256 00:17:52.256 ' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:52.256 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.256 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
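The trace above records a genuine script error at nvmf/common.sh line 33: the test `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset flag expands to an empty string before the numeric comparison. A minimal standalone reproduction (the variable name `FLAG` is illustrative, not taken from the script):

```shell
# Reproduce the logged failure: -eq on an empty string is not a valid
# integer comparison, so test(1) errors out and the then-branch is skipped.
FLAG=""
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  result="set"
else
  result="unset-or-invalid"
fi
echo "$result"          # prints "unset-or-invalid"

# Defaulting the variable to 0 keeps the numeric test well-formed:
if [ "${FLAG:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

The suite tolerates the error because the failed test simply takes the else path; the `${VAR:-0}` default is the usual defensive form.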
00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.257 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:58.824 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:17:58.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:58.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:58.825 Found net devices under 0000:af:00.0: cvl_0_0 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.825 
11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:58.825 Found net devices under 0000:af:00.1: cvl_0_1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
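Having picked cvl_0_0 as the target interface and cvl_0_1 as the initiator, nvmf_tcp_init moves the target port into a private network namespace so both ends of the TCP connection (10.0.0.2 target, 10.0.0.1 initiator) can run on one host over the real wire. A dry-run sketch of that sequence, echoed rather than executed since the real commands need root and this machine's cvl_0_* interfaces:

```shell
# Dry-run of the namespace topology built in the trace (interface and
# namespace names taken from the log). Echo each command instead of
# running it, since execution requires root and the E810 ports.
ns=cvl_0_0_ns_spdk
for cmd in \
  "ip netns add $ns" \
  "ip link set cvl_0_0 netns $ns" \
  "ip addr add 10.0.0.1/24 dev cvl_0_1" \
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0" \
  "ip link set cvl_0_1 up" \
  "ip netns exec $ns ip link set cvl_0_0 up"; do
  echo "$cmd"
done
```

The trace then opens TCP port 4420 with an iptables ACCEPT rule and verifies both directions with single-packet pings before starting the target.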
00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:58.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:17:58.825 00:17:58.825 --- 10.0.0.2 ping statistics --- 00:17:58.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.825 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:58.825 00:17:58.825 --- 10.0.0.1 ping statistics --- 00:17:58.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.825 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1728143 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1728143 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1728143 ']' 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.825 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.825 [2024-12-06 11:19:30.901327] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:17:58.825 [2024-12-06 11:19:30.901367] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:58.825 [2024-12-06 11:19:30.980454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.825 [2024-12-06 11:19:31.024072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.825 [2024-12-06 11:19:31.024103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.825 [2024-12-06 11:19:31.024110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.826 [2024-12-06 11:19:31.024117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.826 [2024-12-06 11:19:31.024122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.826 [2024-12-06 11:19:31.025362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.826 [2024-12-06 11:19:31.025478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:58.826 [2024-12-06 11:19:31.025564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.826 [2024-12-06 11:19:31.025565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.826 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.826 [2024-12-06 11:19:31.754699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.084 11:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 Malloc0 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 [2024-12-06 11:19:31.798991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.084 11:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.084 { 00:17:59.084 "params": { 00:17:59.084 "name": "Nvme$subsystem", 00:17:59.084 "trtype": "$TEST_TRANSPORT", 00:17:59.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.084 "adrfam": "ipv4", 00:17:59.084 "trsvcid": "$NVMF_PORT", 00:17:59.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.084 "hdgst": ${hdgst:-false}, 00:17:59.084 "ddgst": ${ddgst:-false} 00:17:59.084 }, 00:17:59.084 "method": "bdev_nvme_attach_controller" 00:17:59.084 } 00:17:59.084 EOF 00:17:59.084 )") 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
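gen_nvmf_target_json expands the heredoc template above into the controller JSON that bdevio consumes via `--json /dev/fd/62`. Reconstructed here with the substituted values that the trace's `printf` output shows (one controller, Nvme1, TCP to 10.0.0.2:4420, digests disabled):

```shell
# Emit the attach-controller fragment as the trace's printf renders it.
cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
```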
00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:59.084 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:59.084 "params": { 00:17:59.084 "name": "Nvme1", 00:17:59.084 "trtype": "tcp", 00:17:59.084 "traddr": "10.0.0.2", 00:17:59.084 "adrfam": "ipv4", 00:17:59.084 "trsvcid": "4420", 00:17:59.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.084 "hdgst": false, 00:17:59.084 "ddgst": false 00:17:59.084 }, 00:17:59.084 "method": "bdev_nvme_attach_controller" 00:17:59.084 }' 00:17:59.084 [2024-12-06 11:19:31.848566] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:17:59.084 [2024-12-06 11:19:31.848612] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1728349 ] 00:17:59.084 [2024-12-06 11:19:31.925687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.084 [2024-12-06 11:19:31.970539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.084 [2024-12-06 11:19:31.970650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.084 [2024-12-06 11:19:31.970651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.341 I/O targets: 00:17:59.341 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:59.341 00:17:59.341 00:17:59.341 CUnit - A unit testing framework for C - Version 2.1-3 00:17:59.341 http://cunit.sourceforge.net/ 00:17:59.341 00:17:59.341 00:17:59.341 Suite: bdevio tests on: Nvme1n1 00:17:59.341 Test: blockdev write read block ...passed 00:17:59.341 Test: blockdev write zeroes read block ...passed 00:17:59.341 Test: blockdev write zeroes read no split ...passed 00:17:59.599 Test: blockdev write zeroes 
read split ...passed 00:17:59.599 Test: blockdev write zeroes read split partial ...passed 00:17:59.599 Test: blockdev reset ...[2024-12-06 11:19:32.291646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:59.599 [2024-12-06 11:19:32.291708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c7b10 (9): Bad file descriptor 00:17:59.599 [2024-12-06 11:19:32.306799] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:59.599 passed 00:17:59.599 Test: blockdev write read 8 blocks ...passed 00:17:59.599 Test: blockdev write read size > 128k ...passed 00:17:59.599 Test: blockdev write read invalid size ...passed 00:17:59.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.599 Test: blockdev write read max offset ...passed 00:17:59.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.599 Test: blockdev writev readv 8 blocks ...passed 00:17:59.599 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.857 Test: blockdev writev readv block ...passed 00:17:59.857 Test: blockdev writev readv size > 128k ...passed 00:17:59.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.857 Test: blockdev comparev and writev ...[2024-12-06 11:19:32.559824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.559853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.559865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 
11:19:32.559872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.560610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:59.857 [2024-12-06 11:19:32.560617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.857 passed 00:17:59.857 Test: blockdev nvme passthru rw ...passed 00:17:59.857 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:19:32.642456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.857 [2024-12-06 11:19:32.642470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.642567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.857 [2024-12-06 11:19:32.642576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.642676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.857 [2024-12-06 11:19:32.642685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:59.857 [2024-12-06 11:19:32.642785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.857 [2024-12-06 11:19:32.642793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:59.857 passed 00:17:59.857 Test: blockdev nvme admin passthru ...passed 00:17:59.858 Test: blockdev copy ...passed 00:17:59.858 00:17:59.858 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.858 suites 1 1 n/a 0 0 00:17:59.858 tests 23 23 23 0 0 00:17:59.858 asserts 152 152 152 0 n/a 00:17:59.858 00:17:59.858 Elapsed time = 1.079 seconds 
00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.116 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.116 rmmod nvme_tcp 00:18:00.116 rmmod nvme_fabrics 00:18:00.116 rmmod nvme_keyring 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1728143 ']' 00:18:00.116 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1728143 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1728143 ']' 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1728143 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.116 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1728143 00:18:00.375 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:00.375 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:00.375 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1728143' 00:18:00.375 killing process with pid 1728143 00:18:00.375 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1728143 00:18:00.375 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1728143 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:00.633 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.633 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.538 00:18:02.538 real 0m10.746s 00:18:02.538 user 0m13.018s 00:18:02.538 sys 0m5.297s 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.538 ************************************ 00:18:02.538 END TEST nvmf_bdevio_no_huge 00:18:02.538 ************************************ 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.538 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.798 
************************************ 00:18:02.798 START TEST nvmf_tls 00:18:02.798 ************************************ 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:02.798 * Looking for test storage... 00:18:02.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.798 
11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:02.799 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.369 11:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:09.369 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.369 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:09.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.370 11:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:09.370 Found net devices under 0000:af:00.0: cvl_0_0 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:09.370 Found net devices under 0000:af:00.1: cvl_0_1 00:18:09.370 11:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.370 
11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:18:09.370 00:18:09.370 --- 10.0.0.2 ping statistics --- 00:18:09.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.370 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:09.370 00:18:09.370 --- 10.0.0.1 ping statistics --- 00:18:09.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.370 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1732335 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1732335 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1732335 ']' 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.370 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.370 [2024-12-06 11:19:41.780002] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:09.370 [2024-12-06 11:19:41.780055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.370 [2024-12-06 11:19:41.857193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.370 [2024-12-06 11:19:41.895995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.370 [2024-12-06 11:19:41.896026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:09.370 [2024-12-06 11:19:41.896033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.370 [2024-12-06 11:19:41.896039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.370 [2024-12-06 11:19:41.896044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.370 [2024-12-06 11:19:41.896567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:09.938 true 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.938 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:10.197 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:10.197 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:10.197 
11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:10.456 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.456 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:10.456 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:10.456 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:10.456 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:10.715 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.715 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:10.975 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:11.234 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.234 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:11.493 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:11.493 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:11.493 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:11.493 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.493 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:11.752 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:11.752 11:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.wk1GYd8JNr 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KGZksruLEB 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wk1GYd8JNr 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KGZksruLEB 00:18:11.753 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:12.011 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:12.270 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.wk1GYd8JNr 00:18:12.270 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wk1GYd8JNr 00:18:12.270 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.270 [2024-12-06 11:19:45.187640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.270 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.528 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.786 [2024-12-06 11:19:45.548555] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.786 [2024-12-06 11:19:45.548764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.786 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.786 malloc0 00:18:13.044 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:13.044 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wk1GYd8JNr 00:18:13.302 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.560 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wk1GYd8JNr 00:18:23.538 Initializing NVMe Controllers 00:18:23.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:23.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:23.538 Initialization complete. Launching workers. 
00:18:23.538 ======================================================== 00:18:23.538 Latency(us) 00:18:23.538 Device Information : IOPS MiB/s Average min max 00:18:23.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18288.97 71.44 3499.45 727.16 5155.97 00:18:23.538 ======================================================== 00:18:23.538 Total : 18288.97 71.44 3499.45 727.16 5155.97 00:18:23.538 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wk1GYd8JNr 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wk1GYd8JNr 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1734942 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1734942 /var/tmp/bdevperf.sock 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1734942 ']' 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.538 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.538 [2024-12-06 11:19:56.396877] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:23.538 [2024-12-06 11:19:56.396925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734942 ] 00:18:23.538 [2024-12-06 11:19:56.469972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.798 [2024-12-06 11:19:56.509172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.798 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.798 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.798 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wk1GYd8JNr 00:18:24.058 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:24.058 [2024-12-06 11:19:56.940848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.317 TLSTESTn1 00:18:24.317 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.317 Running I/O for 10 seconds... 00:18:26.187 5696.00 IOPS, 22.25 MiB/s [2024-12-06T10:20:00.504Z] 5815.50 IOPS, 22.72 MiB/s [2024-12-06T10:20:01.440Z] 5778.33 IOPS, 22.57 MiB/s [2024-12-06T10:20:02.376Z] 5755.00 IOPS, 22.48 MiB/s [2024-12-06T10:20:03.314Z] 5771.00 IOPS, 22.54 MiB/s [2024-12-06T10:20:04.252Z] 5772.50 IOPS, 22.55 MiB/s [2024-12-06T10:20:05.191Z] 5808.29 IOPS, 22.69 MiB/s [2024-12-06T10:20:06.136Z] 5718.12 IOPS, 22.34 MiB/s [2024-12-06T10:20:07.513Z] 5664.11 IOPS, 22.13 MiB/s [2024-12-06T10:20:07.513Z] 5490.20 IOPS, 21.45 MiB/s 00:18:34.575 Latency(us) 00:18:34.575 [2024-12-06T10:20:07.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.575 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.575 Verification LBA range: start 0x0 length 0x2000 00:18:34.575 TLSTESTn1 : 10.02 5494.03 21.46 0.00 0.00 23264.12 4230.05 228780.22 00:18:34.575 [2024-12-06T10:20:07.513Z] =================================================================================================================== 00:18:34.575 [2024-12-06T10:20:07.513Z] Total : 5494.03 21.46 0.00 0.00 23264.12 4230.05 228780.22 00:18:34.575 { 00:18:34.575 "results": [ 00:18:34.575 { 00:18:34.575 "job": "TLSTESTn1", 00:18:34.575 "core_mask": "0x4", 00:18:34.575 "workload": "verify", 00:18:34.575 "status": "finished", 00:18:34.575 "verify_range": { 00:18:34.576 "start": 0, 00:18:34.576 "length": 8192 00:18:34.576 }, 00:18:34.576 "queue_depth": 128, 00:18:34.576 "io_size": 4096, 00:18:34.576 "runtime": 10.016142, 00:18:34.576 "iops": 
5494.031534297337, 00:18:34.576 "mibps": 21.461060680848973, 00:18:34.576 "io_failed": 0, 00:18:34.576 "io_timeout": 0, 00:18:34.576 "avg_latency_us": 23264.118567036552, 00:18:34.576 "min_latency_us": 4230.050909090909, 00:18:34.576 "max_latency_us": 228780.21818181817 00:18:34.576 } 00:18:34.576 ], 00:18:34.576 "core_count": 1 00:18:34.576 } 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1734942 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1734942 ']' 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1734942 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1734942 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1734942' 00:18:34.576 killing process with pid 1734942 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1734942 00:18:34.576 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.576 00:18:34.576 Latency(us) 00:18:34.576 [2024-12-06T10:20:07.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.576 [2024-12-06T10:20:07.514Z] 
=================================================================================================================== 00:18:34.576 [2024-12-06T10:20:07.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1734942 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KGZksruLEB 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KGZksruLEB 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KGZksruLEB 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KGZksruLEB 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1736854 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1736854 /var/tmp/bdevperf.sock 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1736854 ']' 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.576 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.576 [2024-12-06 11:20:07.433929] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:34.576 [2024-12-06 11:20:07.433973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736854 ] 00:18:34.576 [2024-12-06 11:20:07.505257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.835 [2024-12-06 11:20:07.540597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.835 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.835 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.835 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KGZksruLEB 00:18:35.094 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.094 [2024-12-06 11:20:07.975587] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.094 [2024-12-06 11:20:07.980262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:35.094 [2024-12-06 11:20:07.980879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd32b0 (107): Transport endpoint is not connected 00:18:35.094 [2024-12-06 11:20:07.981871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd32b0 (9): Bad file descriptor 00:18:35.094 [2024-12-06 
11:20:07.982873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:35.094 [2024-12-06 11:20:07.982883] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:35.094 [2024-12-06 11:20:07.982889] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:35.094 [2024-12-06 11:20:07.982899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:35.094 request: 00:18:35.094 { 00:18:35.094 "name": "TLSTEST", 00:18:35.094 "trtype": "tcp", 00:18:35.094 "traddr": "10.0.0.2", 00:18:35.094 "adrfam": "ipv4", 00:18:35.094 "trsvcid": "4420", 00:18:35.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.094 "prchk_reftag": false, 00:18:35.094 "prchk_guard": false, 00:18:35.094 "hdgst": false, 00:18:35.094 "ddgst": false, 00:18:35.094 "psk": "key0", 00:18:35.094 "allow_unrecognized_csi": false, 00:18:35.094 "method": "bdev_nvme_attach_controller", 00:18:35.094 "req_id": 1 00:18:35.094 } 00:18:35.094 Got JSON-RPC error response 00:18:35.094 response: 00:18:35.094 { 00:18:35.094 "code": -5, 00:18:35.094 "message": "Input/output error" 00:18:35.094 } 00:18:35.094 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1736854 00:18:35.094 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1736854 ']' 00:18:35.095 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1736854 00:18:35.095 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.095 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.095 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736854 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736854' 00:18:35.354 killing process with pid 1736854 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1736854 00:18:35.354 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.354 00:18:35.354 Latency(us) 00:18:35.354 [2024-12-06T10:20:08.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.354 [2024-12-06T10:20:08.292Z] =================================================================================================================== 00:18:35.354 [2024-12-06T10:20:08.292Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1736854 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wk1GYd8JNr 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wk1GYd8JNr 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wk1GYd8JNr 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wk1GYd8JNr 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1737022 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1737022 
/var/tmp/bdevperf.sock 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1737022 ']' 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.354 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.354 [2024-12-06 11:20:08.261503] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:35.354 [2024-12-06 11:20:08.261549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737022 ] 00:18:35.614 [2024-12-06 11:20:08.330802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.614 [2024-12-06 11:20:08.370246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.614 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.614 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.614 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wk1GYd8JNr 00:18:35.873 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:35.873 [2024-12-06 11:20:08.788617] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.873 [2024-12-06 11:20:08.793191] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.873 [2024-12-06 11:20:08.793213] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.873 [2024-12-06 11:20:08.793235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:35.873 [2024-12-06 11:20:08.793902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa82b0 (107): Transport endpoint is not connected 00:18:35.873 [2024-12-06 11:20:08.794895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa82b0 (9): Bad file descriptor 00:18:35.873 [2024-12-06 11:20:08.795896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:35.873 [2024-12-06 11:20:08.795908] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:35.873 [2024-12-06 11:20:08.795915] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:35.873 [2024-12-06 11:20:08.795924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:35.873 request: 00:18:35.873 { 00:18:35.873 "name": "TLSTEST", 00:18:35.873 "trtype": "tcp", 00:18:35.873 "traddr": "10.0.0.2", 00:18:35.873 "adrfam": "ipv4", 00:18:35.873 "trsvcid": "4420", 00:18:35.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.873 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:35.873 "prchk_reftag": false, 00:18:35.873 "prchk_guard": false, 00:18:35.873 "hdgst": false, 00:18:35.873 "ddgst": false, 00:18:35.873 "psk": "key0", 00:18:35.873 "allow_unrecognized_csi": false, 00:18:35.873 "method": "bdev_nvme_attach_controller", 00:18:35.874 "req_id": 1 00:18:35.874 } 00:18:35.874 Got JSON-RPC error response 00:18:35.874 response: 00:18:35.874 { 00:18:35.874 "code": -5, 00:18:35.874 "message": "Input/output error" 00:18:35.874 } 00:18:35.874 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1737022 00:18:35.874 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1737022 ']' 00:18:35.874 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1737022 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737022 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737022' 00:18:36.188 killing process with pid 1737022 00:18:36.188 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1737022 00:18:36.188 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.188 00:18:36.189 Latency(us) 00:18:36.189 [2024-12-06T10:20:09.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.189 [2024-12-06T10:20:09.127Z] =================================================================================================================== 00:18:36.189 [2024-12-06T10:20:09.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.189 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1737022 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.189 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wk1GYd8JNr 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wk1GYd8JNr 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wk1GYd8JNr 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wk1GYd8JNr 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1737138 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1737138 /var/tmp/bdevperf.sock 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1737138 ']' 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.189 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.189 [2024-12-06 11:20:09.063900] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:36.189 [2024-12-06 11:20:09.063944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737138 ] 00:18:36.508 [2024-12-06 11:20:09.137479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.508 [2024-12-06 11:20:09.175702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.102 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.102 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.102 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wk1GYd8JNr 00:18:37.361 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.361 [2024-12-06 11:20:10.220496] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.361 [2024-12-06 11:20:10.225222] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:37.361 [2024-12-06 11:20:10.225243] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:37.361 [2024-12-06 11:20:10.225267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:37.361 [2024-12-06 11:20:10.225905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16972b0 (107): Transport endpoint is not connected 00:18:37.361 [2024-12-06 11:20:10.226897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16972b0 (9): Bad file descriptor 00:18:37.361 [2024-12-06 11:20:10.227898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:37.361 [2024-12-06 11:20:10.227908] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.361 [2024-12-06 11:20:10.227920] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:37.361 [2024-12-06 11:20:10.227930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:37.361 request: 00:18:37.361 { 00:18:37.361 "name": "TLSTEST", 00:18:37.361 "trtype": "tcp", 00:18:37.361 "traddr": "10.0.0.2", 00:18:37.361 "adrfam": "ipv4", 00:18:37.361 "trsvcid": "4420", 00:18:37.361 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:37.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.361 "prchk_reftag": false, 00:18:37.361 "prchk_guard": false, 00:18:37.361 "hdgst": false, 00:18:37.361 "ddgst": false, 00:18:37.361 "psk": "key0", 00:18:37.361 "allow_unrecognized_csi": false, 00:18:37.361 "method": "bdev_nvme_attach_controller", 00:18:37.361 "req_id": 1 00:18:37.361 } 00:18:37.361 Got JSON-RPC error response 00:18:37.361 response: 00:18:37.361 { 00:18:37.361 "code": -5, 00:18:37.361 "message": "Input/output error" 00:18:37.361 } 00:18:37.361 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1737138 00:18:37.361 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1737138 ']' 00:18:37.361 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1737138 00:18:37.361 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.362 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.362 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737138 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737138' 00:18:37.621 killing process with pid 1737138 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1737138 00:18:37.621 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.621 00:18:37.621 Latency(us) 00:18:37.621 [2024-12-06T10:20:10.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.621 [2024-12-06T10:20:10.559Z] =================================================================================================================== 00:18:37.621 [2024-12-06T10:20:10.559Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1737138 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.621 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1737417 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.621 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1737417 /var/tmp/bdevperf.sock 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1737417 ']' 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.621 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.621 [2024-12-06 11:20:10.506493] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:37.621 [2024-12-06 11:20:10.506544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737417 ] 00:18:37.880 [2024-12-06 11:20:10.577266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.880 [2024-12-06 11:20:10.613221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.880 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.880 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.880 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:38.140 [2024-12-06 11:20:10.864044] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:38.140 [2024-12-06 11:20:10.864079] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:38.140 request: 00:18:38.140 { 00:18:38.140 "name": "key0", 00:18:38.140 "path": "", 00:18:38.140 "method": "keyring_file_add_key", 00:18:38.140 "req_id": 1 00:18:38.140 } 00:18:38.140 Got JSON-RPC error response 00:18:38.140 response: 00:18:38.140 { 00:18:38.140 "code": -1, 00:18:38.140 "message": "Operation not permitted" 00:18:38.140 } 00:18:38.140 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.140 [2024-12-06 11:20:11.052611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:38.140 [2024-12-06 11:20:11.052638] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:38.140 request: 00:18:38.140 { 00:18:38.140 "name": "TLSTEST", 00:18:38.140 "trtype": "tcp", 00:18:38.140 "traddr": "10.0.0.2", 00:18:38.140 "adrfam": "ipv4", 00:18:38.140 "trsvcid": "4420", 00:18:38.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.140 "prchk_reftag": false, 00:18:38.140 "prchk_guard": false, 00:18:38.140 "hdgst": false, 00:18:38.140 "ddgst": false, 00:18:38.140 "psk": "key0", 00:18:38.140 "allow_unrecognized_csi": false, 00:18:38.140 "method": "bdev_nvme_attach_controller", 00:18:38.140 "req_id": 1 00:18:38.140 } 00:18:38.140 Got JSON-RPC error response 00:18:38.140 response: 00:18:38.140 { 00:18:38.140 "code": -126, 00:18:38.140 "message": "Required key not available" 00:18:38.140 } 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1737417 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1737417 ']' 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1737417 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737417 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737417' 00:18:38.399 killing process with pid 1737417 
00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1737417 00:18:38.399 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.399 00:18:38.399 Latency(us) 00:18:38.399 [2024-12-06T10:20:11.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.399 [2024-12-06T10:20:11.337Z] =================================================================================================================== 00:18:38.399 [2024-12-06T10:20:11.337Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1737417 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1732335 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1732335 ']' 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1732335 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.399 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1732335 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1732335' 00:18:38.658 killing process with pid 1732335 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1732335 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1732335 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.hYEkwJb2VO 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:38.658 11:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.hYEkwJb2VO 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1737697 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1737697 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1737697 ']' 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.658 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.916 [2024-12-06 11:20:11.608080] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:38.916 [2024-12-06 11:20:11.608123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.916 [2024-12-06 11:20:11.682842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.916 [2024-12-06 11:20:11.720496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.916 [2024-12-06 11:20:11.720530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.916 [2024-12-06 11:20:11.720536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.916 [2024-12-06 11:20:11.720541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.916 [2024-12-06 11:20:11.720546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
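The `format_interchange_psk` step earlier in this log turns the configured secret `00112233445566778899aabbccddeeff0011223344556677` into the `NVMeTLSkey-1:02:...` interchange string via an inline `python -` snippet. A minimal sketch of that derivation, assuming the standard NVMe TLS framing (the secret's ASCII bytes followed by their little-endian CRC-32, base64-encoded under the `NVMeTLSkey-1:<hash>:` prefix) — the function name and signature here are illustrative, not SPDK's actual helper:

```python
import base64
import struct
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    """Build an NVMe TLS PSK interchange string (sketch).

    Assumption: the configured secret is used as ASCII bytes, a
    little-endian CRC-32 of those bytes is appended, and the result is
    base64-encoded between 'NVMeTLSkey-1:<hash>:' and a trailing ':'.
    """
    data = secret.encode("ascii")
    # CRC-32 (zlib/ISO-HDLC polynomial), packed little-endian.
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```

The base64 body of the output begins `MDAxMTIy...` (the base64 of the ASCII digits), matching the `key_long` value logged above; the last four decoded bytes are the CRC-32 trailer.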
00:18:38.916 [2024-12-06 11:20:11.721066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:18:39.849 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hYEkwJb2VO 00:18:39.850 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.850 [2024-12-06 11:20:12.622004] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.850 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.107 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.107 [2024-12-06 11:20:12.986968] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.107 [2024-12-06 11:20:12.987168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:40.107 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.365 malloc0 00:18:40.365 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.622 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hYEkwJb2VO 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hYEkwJb2VO 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1737998 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1737998 /var/tmp/bdevperf.sock 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1737998 ']' 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.879 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.879 [2024-12-06 11:20:13.791652] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:40.879 [2024-12-06 11:20:13.791698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737998 ] 00:18:41.137 [2024-12-06 11:20:13.862233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.137 [2024-12-06 11:20:13.900458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.137 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.137 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.137 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:41.394 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.650 [2024-12-06 11:20:14.359193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.650 TLSTESTn1 00:18:41.650 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.650 Running I/O for 10 seconds... 
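The bdevperf results that follow report each sample in both IOPS and MiB/s. For the fixed 4096-byte I/O size configured above (`-o 4096`), the two figures are related by a straight unit conversion; a small sketch (helper name is illustrative):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# Applying it to a run-averaged IOPS value at 4 KiB I/O size:
print(round(iops_to_mibps(5829.03, 4096), 2))  # -> 22.77
```

This is why the per-second samples in the log track each IOPS value with roughly IOPS/256 MiB/s.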
00:18:43.964 5671.00 IOPS, 22.15 MiB/s [2024-12-06T10:20:17.839Z] 5736.00 IOPS, 22.41 MiB/s [2024-12-06T10:20:18.777Z] 5826.67 IOPS, 22.76 MiB/s [2024-12-06T10:20:19.714Z] 5835.50 IOPS, 22.79 MiB/s [2024-12-06T10:20:20.649Z] 5840.80 IOPS, 22.82 MiB/s [2024-12-06T10:20:21.584Z] 5841.17 IOPS, 22.82 MiB/s [2024-12-06T10:20:22.962Z] 5826.57 IOPS, 22.76 MiB/s [2024-12-06T10:20:23.897Z] 5828.88 IOPS, 22.77 MiB/s [2024-12-06T10:20:24.833Z] 5841.56 IOPS, 22.82 MiB/s [2024-12-06T10:20:24.833Z] 5827.40 IOPS, 22.76 MiB/s 00:18:51.895 Latency(us) 00:18:51.895 [2024-12-06T10:20:24.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.895 Verification LBA range: start 0x0 length 0x2000 00:18:51.895 TLSTESTn1 : 10.02 5829.03 22.77 0.00 0.00 21924.05 6285.50 24188.74 00:18:51.895 [2024-12-06T10:20:24.833Z] =================================================================================================================== 00:18:51.895 [2024-12-06T10:20:24.833Z] Total : 5829.03 22.77 0.00 0.00 21924.05 6285.50 24188.74 00:18:51.895 { 00:18:51.895 "results": [ 00:18:51.895 { 00:18:51.895 "job": "TLSTESTn1", 00:18:51.895 "core_mask": "0x4", 00:18:51.895 "workload": "verify", 00:18:51.895 "status": "finished", 00:18:51.895 "verify_range": { 00:18:51.895 "start": 0, 00:18:51.895 "length": 8192 00:18:51.895 }, 00:18:51.895 "queue_depth": 128, 00:18:51.895 "io_size": 4096, 00:18:51.895 "runtime": 10.018999, 00:18:51.895 "iops": 5829.025434576847, 00:18:51.895 "mibps": 22.76963060381581, 00:18:51.895 "io_failed": 0, 00:18:51.895 "io_timeout": 0, 00:18:51.895 "avg_latency_us": 21924.052463609743, 00:18:51.895 "min_latency_us": 6285.498181818181, 00:18:51.895 "max_latency_us": 24188.741818181818 00:18:51.895 } 00:18:51.895 ], 00:18:51.895 "core_count": 1 00:18:51.895 } 00:18:51.895 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:51.895 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1737998 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1737998 ']' 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1737998 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737998 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737998' 00:18:51.896 killing process with pid 1737998 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1737998 00:18:51.896 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.896 00:18:51.896 Latency(us) 00:18:51.896 [2024-12-06T10:20:24.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.896 [2024-12-06T10:20:24.834Z] =================================================================================================================== 00:18:51.896 [2024-12-06T10:20:24.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1737998 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.hYEkwJb2VO 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hYEkwJb2VO 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hYEkwJb2VO 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hYEkwJb2VO 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hYEkwJb2VO 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1740076 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1740076 /var/tmp/bdevperf.sock 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1740076 ']' 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.896 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.155 [2024-12-06 11:20:24.867461] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:18:52.155 [2024-12-06 11:20:24.867503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740076 ] 00:18:52.155 [2024-12-06 11:20:24.943770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.155 [2024-12-06 11:20:24.978916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.155 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.155 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.155 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:52.411 [2024-12-06 11:20:25.229683] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hYEkwJb2VO': 0100666 00:18:52.411 [2024-12-06 11:20:25.229716] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:52.411 request: 00:18:52.411 { 00:18:52.411 "name": "key0", 00:18:52.411 "path": "/tmp/tmp.hYEkwJb2VO", 00:18:52.411 "method": "keyring_file_add_key", 00:18:52.411 "req_id": 1 00:18:52.411 } 00:18:52.411 Got JSON-RPC error response 00:18:52.411 response: 00:18:52.412 { 00:18:52.412 "code": -1, 00:18:52.412 "message": "Operation not permitted" 00:18:52.412 } 00:18:52.412 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.669 [2024-12-06 11:20:25.410224] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.669 [2024-12-06 11:20:25.410250] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:52.669 request: 00:18:52.669 { 00:18:52.669 "name": "TLSTEST", 00:18:52.669 "trtype": "tcp", 00:18:52.669 "traddr": "10.0.0.2", 00:18:52.669 "adrfam": "ipv4", 00:18:52.669 "trsvcid": "4420", 00:18:52.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.669 "prchk_reftag": false, 00:18:52.669 "prchk_guard": false, 00:18:52.669 "hdgst": false, 00:18:52.669 "ddgst": false, 00:18:52.669 "psk": "key0", 00:18:52.669 "allow_unrecognized_csi": false, 00:18:52.669 "method": "bdev_nvme_attach_controller", 00:18:52.669 "req_id": 1 00:18:52.669 } 00:18:52.669 Got JSON-RPC error response 00:18:52.669 response: 00:18:52.669 { 00:18:52.669 "code": -126, 00:18:52.669 "message": "Required key not available" 00:18:52.669 } 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1740076 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1740076 ']' 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1740076 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740076 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.669 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.670 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1740076' 00:18:52.670 killing process with pid 1740076 00:18:52.670 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1740076 00:18:52.670 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.670 00:18:52.670 Latency(us) 00:18:52.670 [2024-12-06T10:20:25.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.670 [2024-12-06T10:20:25.608Z] =================================================================================================================== 00:18:52.670 [2024-12-06T10:20:25.608Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.670 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1740076 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1737697 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1737697 ']' 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1737697 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737697 00:18:52.928 
11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737697' 00:18:52.928 killing process with pid 1737697 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1737697 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1737697 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1740137 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1740137 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1740137 ']' 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:52.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.928 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.186 [2024-12-06 11:20:25.899526] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:53.186 [2024-12-06 11:20:25.899568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.186 [2024-12-06 11:20:25.977096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.186 [2024-12-06 11:20:26.012357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.186 [2024-12-06 11:20:26.012391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.186 [2024-12-06 11:20:26.012397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.186 [2024-12-06 11:20:26.012402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.186 [2024-12-06 11:20:26.012408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
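The `keyring_file_add_key` failures above show the pattern: the key file loaded fine under `chmod 0600`, but after `chmod 0666` the keyring rejected it with "Invalid permissions for key file ... 0100666". A sketch of that kind of check, assuming the rule is simply "no group/other access bits" (this is an illustrative stand-in for `keyring_file_check_path`, not SPDK's C implementation):

```python
import os
import stat

def check_key_file_permissions(path: str) -> None:
    """Reject key files that are readable or writable by group/other.

    Mirrors the behavior seen in the log: mode 0600 is accepted,
    mode 0666 raises an 'invalid permissions' error.
    """
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"Invalid permissions for key file {path!r}: {oct(mode)}")
```

Under this rule the temp key file must stay owner-only for the TLS tests to load it, which is exactly what the `chmod 0600` / `chmod 0666` steps in this run exercise.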
00:18:53.186 [2024-12-06 11:20:26.012920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.186 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.186 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.186 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.186 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.186 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hYEkwJb2VO 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.444 [2024-12-06 11:20:26.320467] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.444 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:53.702 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:53.960 [2024-12-06 11:20:26.665359] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.960 [2024-12-06 11:20:26.665564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.960 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:53.960 malloc0 00:18:53.960 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.219 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:54.478 [2024-12-06 11:20:27.190603] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hYEkwJb2VO': 0100666 00:18:54.478 [2024-12-06 11:20:27.190631] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:54.478 request: 00:18:54.478 { 00:18:54.478 "name": "key0", 00:18:54.478 "path": "/tmp/tmp.hYEkwJb2VO", 00:18:54.478 "method": "keyring_file_add_key", 00:18:54.478 "req_id": 1 
00:18:54.478 } 00:18:54.478 Got JSON-RPC error response 00:18:54.478 response: 00:18:54.478 { 00:18:54.478 "code": -1, 00:18:54.478 "message": "Operation not permitted" 00:18:54.478 } 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.478 [2024-12-06 11:20:27.363074] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:54.478 [2024-12-06 11:20:27.363106] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:54.478 request: 00:18:54.478 { 00:18:54.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.478 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.478 "psk": "key0", 00:18:54.478 "method": "nvmf_subsystem_add_host", 00:18:54.478 "req_id": 1 00:18:54.478 } 00:18:54.478 Got JSON-RPC error response 00:18:54.478 response: 00:18:54.478 { 00:18:54.478 "code": -32603, 00:18:54.478 "message": "Internal error" 00:18:54.478 } 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1740137 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1740137 ']' 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1740137 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.478 11:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.478 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740137 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740137' 00:18:54.736 killing process with pid 1740137 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1740137 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1740137 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.hYEkwJb2VO 00:18:54.736 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1740581 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1740581 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1740581 ']' 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.737 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.737 [2024-12-06 11:20:27.650193] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:54.737 [2024-12-06 11:20:27.650239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.996 [2024-12-06 11:20:27.719016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.996 [2024-12-06 11:20:27.756871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.996 [2024-12-06 11:20:27.756909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.996 [2024-12-06 11:20:27.756919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.997 [2024-12-06 11:20:27.756924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.997 [2024-12-06 11:20:27.756929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:54.997 [2024-12-06 11:20:27.757492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hYEkwJb2VO 00:18:54.997 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.256 [2024-12-06 11:20:28.048436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.256 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.516 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.516 [2024-12-06 11:20:28.413367] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.516 [2024-12-06 11:20:28.413549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:55.516 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.775 malloc0 00:18:55.775 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.035 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:56.294 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1740924 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1740924 /var/tmp/bdevperf.sock 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1740924 ']' 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.294 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.294 [2024-12-06 11:20:29.210364] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:56.295 [2024-12-06 11:20:29.210407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740924 ] 00:18:56.554 [2024-12-06 11:20:29.280421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.554 [2024-12-06 11:20:29.317976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.554 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.554 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.554 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:18:56.813 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.813 [2024-12-06 11:20:29.744401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.071 TLSTESTn1 00:18:57.071 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:57.330 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:57.330 "subsystems": [ 00:18:57.330 { 00:18:57.330 "subsystem": "keyring", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "keyring_file_add_key", 00:18:57.330 "params": { 00:18:57.330 "name": "key0", 00:18:57.330 "path": "/tmp/tmp.hYEkwJb2VO" 00:18:57.330 } 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "iobuf", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "iobuf_set_options", 00:18:57.330 "params": { 00:18:57.330 "small_pool_count": 8192, 00:18:57.330 "large_pool_count": 1024, 00:18:57.330 "small_bufsize": 8192, 00:18:57.330 "large_bufsize": 135168, 00:18:57.330 "enable_numa": false 00:18:57.330 } 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "sock", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "sock_set_default_impl", 00:18:57.330 "params": { 00:18:57.330 "impl_name": "posix" 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "sock_impl_set_options", 00:18:57.330 "params": { 00:18:57.330 "impl_name": "ssl", 00:18:57.330 "recv_buf_size": 4096, 00:18:57.330 "send_buf_size": 4096, 00:18:57.330 "enable_recv_pipe": true, 00:18:57.330 "enable_quickack": false, 00:18:57.330 "enable_placement_id": 0, 00:18:57.330 "enable_zerocopy_send_server": true, 00:18:57.330 "enable_zerocopy_send_client": false, 00:18:57.330 "zerocopy_threshold": 0, 00:18:57.330 "tls_version": 0, 00:18:57.330 "enable_ktls": false 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "sock_impl_set_options", 00:18:57.330 "params": { 00:18:57.330 "impl_name": "posix", 00:18:57.330 "recv_buf_size": 2097152, 00:18:57.330 "send_buf_size": 2097152, 00:18:57.330 "enable_recv_pipe": true, 00:18:57.330 "enable_quickack": false, 00:18:57.330 "enable_placement_id": 0, 
00:18:57.330 "enable_zerocopy_send_server": true, 00:18:57.330 "enable_zerocopy_send_client": false, 00:18:57.330 "zerocopy_threshold": 0, 00:18:57.330 "tls_version": 0, 00:18:57.330 "enable_ktls": false 00:18:57.330 } 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "vmd", 00:18:57.330 "config": [] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "accel", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "accel_set_options", 00:18:57.330 "params": { 00:18:57.330 "small_cache_size": 128, 00:18:57.330 "large_cache_size": 16, 00:18:57.330 "task_count": 2048, 00:18:57.330 "sequence_count": 2048, 00:18:57.330 "buf_count": 2048 00:18:57.330 } 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "bdev", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "bdev_set_options", 00:18:57.330 "params": { 00:18:57.330 "bdev_io_pool_size": 65535, 00:18:57.330 "bdev_io_cache_size": 256, 00:18:57.330 "bdev_auto_examine": true, 00:18:57.330 "iobuf_small_cache_size": 128, 00:18:57.330 "iobuf_large_cache_size": 16 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_raid_set_options", 00:18:57.330 "params": { 00:18:57.330 "process_window_size_kb": 1024, 00:18:57.330 "process_max_bandwidth_mb_sec": 0 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_iscsi_set_options", 00:18:57.330 "params": { 00:18:57.330 "timeout_sec": 30 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_nvme_set_options", 00:18:57.330 "params": { 00:18:57.330 "action_on_timeout": "none", 00:18:57.330 "timeout_us": 0, 00:18:57.330 "timeout_admin_us": 0, 00:18:57.330 "keep_alive_timeout_ms": 10000, 00:18:57.330 "arbitration_burst": 0, 00:18:57.330 "low_priority_weight": 0, 00:18:57.330 "medium_priority_weight": 0, 00:18:57.330 "high_priority_weight": 0, 00:18:57.330 "nvme_adminq_poll_period_us": 10000, 00:18:57.330 "nvme_ioq_poll_period_us": 0, 
00:18:57.330 "io_queue_requests": 0, 00:18:57.330 "delay_cmd_submit": true, 00:18:57.330 "transport_retry_count": 4, 00:18:57.330 "bdev_retry_count": 3, 00:18:57.330 "transport_ack_timeout": 0, 00:18:57.330 "ctrlr_loss_timeout_sec": 0, 00:18:57.330 "reconnect_delay_sec": 0, 00:18:57.330 "fast_io_fail_timeout_sec": 0, 00:18:57.330 "disable_auto_failback": false, 00:18:57.330 "generate_uuids": false, 00:18:57.330 "transport_tos": 0, 00:18:57.330 "nvme_error_stat": false, 00:18:57.330 "rdma_srq_size": 0, 00:18:57.330 "io_path_stat": false, 00:18:57.330 "allow_accel_sequence": false, 00:18:57.330 "rdma_max_cq_size": 0, 00:18:57.330 "rdma_cm_event_timeout_ms": 0, 00:18:57.330 "dhchap_digests": [ 00:18:57.330 "sha256", 00:18:57.330 "sha384", 00:18:57.330 "sha512" 00:18:57.330 ], 00:18:57.330 "dhchap_dhgroups": [ 00:18:57.330 "null", 00:18:57.330 "ffdhe2048", 00:18:57.330 "ffdhe3072", 00:18:57.330 "ffdhe4096", 00:18:57.330 "ffdhe6144", 00:18:57.330 "ffdhe8192" 00:18:57.330 ] 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_nvme_set_hotplug", 00:18:57.330 "params": { 00:18:57.330 "period_us": 100000, 00:18:57.330 "enable": false 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_malloc_create", 00:18:57.330 "params": { 00:18:57.330 "name": "malloc0", 00:18:57.330 "num_blocks": 8192, 00:18:57.330 "block_size": 4096, 00:18:57.330 "physical_block_size": 4096, 00:18:57.330 "uuid": "8661b44b-d994-41d1-97ff-fbcef50fd679", 00:18:57.330 "optimal_io_boundary": 0, 00:18:57.330 "md_size": 0, 00:18:57.330 "dif_type": 0, 00:18:57.330 "dif_is_head_of_md": false, 00:18:57.330 "dif_pi_format": 0 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "bdev_wait_for_examine" 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "nbd", 00:18:57.330 "config": [] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "scheduler", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": 
"framework_set_scheduler", 00:18:57.330 "params": { 00:18:57.330 "name": "static" 00:18:57.330 } 00:18:57.330 } 00:18:57.330 ] 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "subsystem": "nvmf", 00:18:57.330 "config": [ 00:18:57.330 { 00:18:57.330 "method": "nvmf_set_config", 00:18:57.330 "params": { 00:18:57.330 "discovery_filter": "match_any", 00:18:57.330 "admin_cmd_passthru": { 00:18:57.330 "identify_ctrlr": false 00:18:57.330 }, 00:18:57.330 "dhchap_digests": [ 00:18:57.330 "sha256", 00:18:57.330 "sha384", 00:18:57.330 "sha512" 00:18:57.330 ], 00:18:57.330 "dhchap_dhgroups": [ 00:18:57.330 "null", 00:18:57.330 "ffdhe2048", 00:18:57.330 "ffdhe3072", 00:18:57.330 "ffdhe4096", 00:18:57.330 "ffdhe6144", 00:18:57.330 "ffdhe8192" 00:18:57.330 ] 00:18:57.330 } 00:18:57.330 }, 00:18:57.330 { 00:18:57.330 "method": "nvmf_set_max_subsystems", 00:18:57.330 "params": { 00:18:57.331 "max_subsystems": 1024 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_set_crdt", 00:18:57.331 "params": { 00:18:57.331 "crdt1": 0, 00:18:57.331 "crdt2": 0, 00:18:57.331 "crdt3": 0 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_create_transport", 00:18:57.331 "params": { 00:18:57.331 "trtype": "TCP", 00:18:57.331 "max_queue_depth": 128, 00:18:57.331 "max_io_qpairs_per_ctrlr": 127, 00:18:57.331 "in_capsule_data_size": 4096, 00:18:57.331 "max_io_size": 131072, 00:18:57.331 "io_unit_size": 131072, 00:18:57.331 "max_aq_depth": 128, 00:18:57.331 "num_shared_buffers": 511, 00:18:57.331 "buf_cache_size": 4294967295, 00:18:57.331 "dif_insert_or_strip": false, 00:18:57.331 "zcopy": false, 00:18:57.331 "c2h_success": false, 00:18:57.331 "sock_priority": 0, 00:18:57.331 "abort_timeout_sec": 1, 00:18:57.331 "ack_timeout": 0, 00:18:57.331 "data_wr_pool_size": 0 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_create_subsystem", 00:18:57.331 "params": { 00:18:57.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.331 
"allow_any_host": false, 00:18:57.331 "serial_number": "SPDK00000000000001", 00:18:57.331 "model_number": "SPDK bdev Controller", 00:18:57.331 "max_namespaces": 10, 00:18:57.331 "min_cntlid": 1, 00:18:57.331 "max_cntlid": 65519, 00:18:57.331 "ana_reporting": false 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_subsystem_add_host", 00:18:57.331 "params": { 00:18:57.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.331 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.331 "psk": "key0" 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_subsystem_add_ns", 00:18:57.331 "params": { 00:18:57.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.331 "namespace": { 00:18:57.331 "nsid": 1, 00:18:57.331 "bdev_name": "malloc0", 00:18:57.331 "nguid": "8661B44BD99441D197FFFBCEF50FD679", 00:18:57.331 "uuid": "8661b44b-d994-41d1-97ff-fbcef50fd679", 00:18:57.331 "no_auto_visible": false 00:18:57.331 } 00:18:57.331 } 00:18:57.331 }, 00:18:57.331 { 00:18:57.331 "method": "nvmf_subsystem_add_listener", 00:18:57.331 "params": { 00:18:57.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.331 "listen_address": { 00:18:57.331 "trtype": "TCP", 00:18:57.331 "adrfam": "IPv4", 00:18:57.331 "traddr": "10.0.0.2", 00:18:57.331 "trsvcid": "4420" 00:18:57.331 }, 00:18:57.331 "secure_channel": true 00:18:57.331 } 00:18:57.331 } 00:18:57.331 ] 00:18:57.331 } 00:18:57.331 ] 00:18:57.331 }' 00:18:57.331 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:57.591 "subsystems": [ 00:18:57.591 { 00:18:57.591 "subsystem": "keyring", 00:18:57.591 "config": [ 00:18:57.591 { 00:18:57.591 "method": "keyring_file_add_key", 00:18:57.591 "params": { 00:18:57.591 "name": "key0", 00:18:57.591 "path": "/tmp/tmp.hYEkwJb2VO" 00:18:57.591 } 
00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "iobuf", 00:18:57.591 "config": [ 00:18:57.591 { 00:18:57.591 "method": "iobuf_set_options", 00:18:57.591 "params": { 00:18:57.591 "small_pool_count": 8192, 00:18:57.591 "large_pool_count": 1024, 00:18:57.591 "small_bufsize": 8192, 00:18:57.591 "large_bufsize": 135168, 00:18:57.591 "enable_numa": false 00:18:57.591 } 00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "sock", 00:18:57.591 "config": [ 00:18:57.591 { 00:18:57.591 "method": "sock_set_default_impl", 00:18:57.591 "params": { 00:18:57.591 "impl_name": "posix" 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "sock_impl_set_options", 00:18:57.591 "params": { 00:18:57.591 "impl_name": "ssl", 00:18:57.591 "recv_buf_size": 4096, 00:18:57.591 "send_buf_size": 4096, 00:18:57.591 "enable_recv_pipe": true, 00:18:57.591 "enable_quickack": false, 00:18:57.591 "enable_placement_id": 0, 00:18:57.591 "enable_zerocopy_send_server": true, 00:18:57.591 "enable_zerocopy_send_client": false, 00:18:57.591 "zerocopy_threshold": 0, 00:18:57.591 "tls_version": 0, 00:18:57.591 "enable_ktls": false 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "sock_impl_set_options", 00:18:57.591 "params": { 00:18:57.591 "impl_name": "posix", 00:18:57.591 "recv_buf_size": 2097152, 00:18:57.591 "send_buf_size": 2097152, 00:18:57.591 "enable_recv_pipe": true, 00:18:57.591 "enable_quickack": false, 00:18:57.591 "enable_placement_id": 0, 00:18:57.591 "enable_zerocopy_send_server": true, 00:18:57.591 "enable_zerocopy_send_client": false, 00:18:57.591 "zerocopy_threshold": 0, 00:18:57.591 "tls_version": 0, 00:18:57.591 "enable_ktls": false 00:18:57.591 } 00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "vmd", 00:18:57.591 "config": [] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "accel", 00:18:57.591 "config": [ 00:18:57.591 { 00:18:57.591 
"method": "accel_set_options", 00:18:57.591 "params": { 00:18:57.591 "small_cache_size": 128, 00:18:57.591 "large_cache_size": 16, 00:18:57.591 "task_count": 2048, 00:18:57.591 "sequence_count": 2048, 00:18:57.591 "buf_count": 2048 00:18:57.591 } 00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "bdev", 00:18:57.591 "config": [ 00:18:57.591 { 00:18:57.591 "method": "bdev_set_options", 00:18:57.591 "params": { 00:18:57.591 "bdev_io_pool_size": 65535, 00:18:57.591 "bdev_io_cache_size": 256, 00:18:57.591 "bdev_auto_examine": true, 00:18:57.591 "iobuf_small_cache_size": 128, 00:18:57.591 "iobuf_large_cache_size": 16 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_raid_set_options", 00:18:57.591 "params": { 00:18:57.591 "process_window_size_kb": 1024, 00:18:57.591 "process_max_bandwidth_mb_sec": 0 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_iscsi_set_options", 00:18:57.591 "params": { 00:18:57.591 "timeout_sec": 30 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_nvme_set_options", 00:18:57.591 "params": { 00:18:57.591 "action_on_timeout": "none", 00:18:57.591 "timeout_us": 0, 00:18:57.591 "timeout_admin_us": 0, 00:18:57.591 "keep_alive_timeout_ms": 10000, 00:18:57.591 "arbitration_burst": 0, 00:18:57.591 "low_priority_weight": 0, 00:18:57.591 "medium_priority_weight": 0, 00:18:57.591 "high_priority_weight": 0, 00:18:57.591 "nvme_adminq_poll_period_us": 10000, 00:18:57.591 "nvme_ioq_poll_period_us": 0, 00:18:57.591 "io_queue_requests": 512, 00:18:57.591 "delay_cmd_submit": true, 00:18:57.591 "transport_retry_count": 4, 00:18:57.591 "bdev_retry_count": 3, 00:18:57.591 "transport_ack_timeout": 0, 00:18:57.591 "ctrlr_loss_timeout_sec": 0, 00:18:57.591 "reconnect_delay_sec": 0, 00:18:57.591 "fast_io_fail_timeout_sec": 0, 00:18:57.591 "disable_auto_failback": false, 00:18:57.591 "generate_uuids": false, 00:18:57.591 "transport_tos": 0, 00:18:57.591 
"nvme_error_stat": false, 00:18:57.591 "rdma_srq_size": 0, 00:18:57.591 "io_path_stat": false, 00:18:57.591 "allow_accel_sequence": false, 00:18:57.591 "rdma_max_cq_size": 0, 00:18:57.591 "rdma_cm_event_timeout_ms": 0, 00:18:57.591 "dhchap_digests": [ 00:18:57.591 "sha256", 00:18:57.591 "sha384", 00:18:57.591 "sha512" 00:18:57.591 ], 00:18:57.591 "dhchap_dhgroups": [ 00:18:57.591 "null", 00:18:57.591 "ffdhe2048", 00:18:57.591 "ffdhe3072", 00:18:57.591 "ffdhe4096", 00:18:57.591 "ffdhe6144", 00:18:57.591 "ffdhe8192" 00:18:57.591 ] 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_nvme_attach_controller", 00:18:57.591 "params": { 00:18:57.591 "name": "TLSTEST", 00:18:57.591 "trtype": "TCP", 00:18:57.591 "adrfam": "IPv4", 00:18:57.591 "traddr": "10.0.0.2", 00:18:57.591 "trsvcid": "4420", 00:18:57.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.591 "prchk_reftag": false, 00:18:57.591 "prchk_guard": false, 00:18:57.591 "ctrlr_loss_timeout_sec": 0, 00:18:57.591 "reconnect_delay_sec": 0, 00:18:57.591 "fast_io_fail_timeout_sec": 0, 00:18:57.591 "psk": "key0", 00:18:57.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.591 "hdgst": false, 00:18:57.591 "ddgst": false, 00:18:57.591 "multipath": "multipath" 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_nvme_set_hotplug", 00:18:57.591 "params": { 00:18:57.591 "period_us": 100000, 00:18:57.591 "enable": false 00:18:57.591 } 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "method": "bdev_wait_for_examine" 00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }, 00:18:57.591 { 00:18:57.591 "subsystem": "nbd", 00:18:57.591 "config": [] 00:18:57.591 } 00:18:57.591 ] 00:18:57.591 }' 00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1740924 00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1740924 ']' 00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1740924
00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:57.591 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740924
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740924'
00:18:57.592 killing process with pid 1740924
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1740924
00:18:57.592 Received shutdown signal, test time was about 10.000000 seconds
00:18:57.592
00:18:57.592 Latency(us)
00:18:57.592 [2024-12-06T10:20:30.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:57.592 [2024-12-06T10:20:30.530Z] ===================================================================================================================
00:18:57.592 [2024-12-06T10:20:30.530Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:57.592 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1740924
00:18:57.850 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1740581
00:18:57.850 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1740581 ']'
00:18:57.850 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1740581
00:18:57.850 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:57.850 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740581 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740581' 00:18:57.851 killing process with pid 1740581 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1740581 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1740581 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.851 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:57.851 "subsystems": [ 00:18:57.851 { 00:18:57.851 "subsystem": "keyring", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "keyring_file_add_key", 00:18:57.851 "params": { 00:18:57.851 "name": "key0", 00:18:57.851 "path": "/tmp/tmp.hYEkwJb2VO" 00:18:57.851 } 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "iobuf", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "iobuf_set_options", 00:18:57.851 "params": { 00:18:57.851 "small_pool_count": 8192, 00:18:57.851 "large_pool_count": 1024, 00:18:57.851 "small_bufsize": 8192, 00:18:57.851 "large_bufsize": 135168, 00:18:57.851 "enable_numa": false 00:18:57.851 } 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 
00:18:57.851 { 00:18:57.851 "subsystem": "sock", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "sock_set_default_impl", 00:18:57.851 "params": { 00:18:57.851 "impl_name": "posix" 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "sock_impl_set_options", 00:18:57.851 "params": { 00:18:57.851 "impl_name": "ssl", 00:18:57.851 "recv_buf_size": 4096, 00:18:57.851 "send_buf_size": 4096, 00:18:57.851 "enable_recv_pipe": true, 00:18:57.851 "enable_quickack": false, 00:18:57.851 "enable_placement_id": 0, 00:18:57.851 "enable_zerocopy_send_server": true, 00:18:57.851 "enable_zerocopy_send_client": false, 00:18:57.851 "zerocopy_threshold": 0, 00:18:57.851 "tls_version": 0, 00:18:57.851 "enable_ktls": false 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "sock_impl_set_options", 00:18:57.851 "params": { 00:18:57.851 "impl_name": "posix", 00:18:57.851 "recv_buf_size": 2097152, 00:18:57.851 "send_buf_size": 2097152, 00:18:57.851 "enable_recv_pipe": true, 00:18:57.851 "enable_quickack": false, 00:18:57.851 "enable_placement_id": 0, 00:18:57.851 "enable_zerocopy_send_server": true, 00:18:57.851 "enable_zerocopy_send_client": false, 00:18:57.851 "zerocopy_threshold": 0, 00:18:57.851 "tls_version": 0, 00:18:57.851 "enable_ktls": false 00:18:57.851 } 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "vmd", 00:18:57.851 "config": [] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "accel", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "accel_set_options", 00:18:57.851 "params": { 00:18:57.851 "small_cache_size": 128, 00:18:57.851 "large_cache_size": 16, 00:18:57.851 "task_count": 2048, 00:18:57.851 "sequence_count": 2048, 00:18:57.851 "buf_count": 2048 00:18:57.851 } 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "bdev", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "bdev_set_options", 00:18:57.851 "params": { 
00:18:57.851 "bdev_io_pool_size": 65535, 00:18:57.851 "bdev_io_cache_size": 256, 00:18:57.851 "bdev_auto_examine": true, 00:18:57.851 "iobuf_small_cache_size": 128, 00:18:57.851 "iobuf_large_cache_size": 16 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_raid_set_options", 00:18:57.851 "params": { 00:18:57.851 "process_window_size_kb": 1024, 00:18:57.851 "process_max_bandwidth_mb_sec": 0 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_iscsi_set_options", 00:18:57.851 "params": { 00:18:57.851 "timeout_sec": 30 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_nvme_set_options", 00:18:57.851 "params": { 00:18:57.851 "action_on_timeout": "none", 00:18:57.851 "timeout_us": 0, 00:18:57.851 "timeout_admin_us": 0, 00:18:57.851 "keep_alive_timeout_ms": 10000, 00:18:57.851 "arbitration_burst": 0, 00:18:57.851 "low_priority_weight": 0, 00:18:57.851 "medium_priority_weight": 0, 00:18:57.851 "high_priority_weight": 0, 00:18:57.851 "nvme_adminq_poll_period_us": 10000, 00:18:57.851 "nvme_ioq_poll_period_us": 0, 00:18:57.851 "io_queue_requests": 0, 00:18:57.851 "delay_cmd_submit": true, 00:18:57.851 "transport_retry_count": 4, 00:18:57.851 "bdev_retry_count": 3, 00:18:57.851 "transport_ack_timeout": 0, 00:18:57.851 "ctrlr_loss_timeout_sec": 0, 00:18:57.851 "reconnect_delay_sec": 0, 00:18:57.851 "fast_io_fail_timeout_sec": 0, 00:18:57.851 "disable_auto_failback": false, 00:18:57.851 "generate_uuids": false, 00:18:57.851 "transport_tos": 0, 00:18:57.851 "nvme_error_stat": false, 00:18:57.851 "rdma_srq_size": 0, 00:18:57.851 "io_path_stat": false, 00:18:57.851 "allow_accel_sequence": false, 00:18:57.851 "rdma_max_cq_size": 0, 00:18:57.851 "rdma_cm_event_timeout_ms": 0, 00:18:57.851 "dhchap_digests": [ 00:18:57.851 "sha256", 00:18:57.851 "sha384", 00:18:57.851 "sha512" 00:18:57.851 ], 00:18:57.851 "dhchap_dhgroups": [ 00:18:57.851 "null", 00:18:57.851 "ffdhe2048", 00:18:57.851 "ffdhe3072", 00:18:57.851 
"ffdhe4096", 00:18:57.851 "ffdhe6144", 00:18:57.851 "ffdhe8192" 00:18:57.851 ] 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_nvme_set_hotplug", 00:18:57.851 "params": { 00:18:57.851 "period_us": 100000, 00:18:57.851 "enable": false 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_malloc_create", 00:18:57.851 "params": { 00:18:57.851 "name": "malloc0", 00:18:57.851 "num_blocks": 8192, 00:18:57.851 "block_size": 4096, 00:18:57.851 "physical_block_size": 4096, 00:18:57.851 "uuid": "8661b44b-d994-41d1-97ff-fbcef50fd679", 00:18:57.851 "optimal_io_boundary": 0, 00:18:57.851 "md_size": 0, 00:18:57.851 "dif_type": 0, 00:18:57.851 "dif_is_head_of_md": false, 00:18:57.851 "dif_pi_format": 0 00:18:57.851 } 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "method": "bdev_wait_for_examine" 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "nbd", 00:18:57.851 "config": [] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "scheduler", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "framework_set_scheduler", 00:18:57.851 "params": { 00:18:57.851 "name": "static" 00:18:57.851 } 00:18:57.851 } 00:18:57.851 ] 00:18:57.851 }, 00:18:57.851 { 00:18:57.851 "subsystem": "nvmf", 00:18:57.851 "config": [ 00:18:57.851 { 00:18:57.851 "method": "nvmf_set_config", 00:18:57.851 "params": { 00:18:57.851 "discovery_filter": "match_any", 00:18:57.851 "admin_cmd_passthru": { 00:18:57.851 "identify_ctrlr": false 00:18:57.851 }, 00:18:57.851 "dhchap_digests": [ 00:18:57.851 "sha256", 00:18:57.851 "sha384", 00:18:57.851 "sha512" 00:18:57.851 ], 00:18:57.851 "dhchap_dhgroups": [ 00:18:57.851 "null", 00:18:57.852 "ffdhe2048", 00:18:57.852 "ffdhe3072", 00:18:57.852 "ffdhe4096", 00:18:57.852 "ffdhe6144", 00:18:57.852 "ffdhe8192" 00:18:57.852 ] 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_set_max_subsystems", 00:18:57.852 "params": { 00:18:57.852 "max_subsystems": 1024 
00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_set_crdt", 00:18:57.852 "params": { 00:18:57.852 "crdt1": 0, 00:18:57.852 "crdt2": 0, 00:18:57.852 "crdt3": 0 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_create_transport", 00:18:57.852 "params": { 00:18:57.852 "trtype": "TCP", 00:18:57.852 "max_queue_depth": 128, 00:18:57.852 "max_io_qpairs_per_ctrlr": 127, 00:18:57.852 "in_capsule_data_size": 4096, 00:18:57.852 "max_io_size": 131072, 00:18:57.852 "io_unit_size": 131072, 00:18:57.852 "max_aq_depth": 128, 00:18:57.852 "num_shared_buffers": 511, 00:18:57.852 "buf_cache_size": 4294967295, 00:18:57.852 "dif_insert_or_strip": false, 00:18:57.852 "zcopy": false, 00:18:57.852 "c2h_success": false, 00:18:57.852 "sock_priority": 0, 00:18:57.852 "abort_timeout_sec": 1, 00:18:57.852 "ack_timeout": 0, 00:18:57.852 "data_wr_pool_size": 0 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_create_subsystem", 00:18:57.852 "params": { 00:18:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.852 "allow_any_host": false, 00:18:57.852 "serial_number": "SPDK00000000000001", 00:18:57.852 "model_number": "SPDK bdev Controller", 00:18:57.852 "max_namespaces": 10, 00:18:57.852 "min_cntlid": 1, 00:18:57.852 "max_cntlid": 65519, 00:18:57.852 "ana_reporting": false 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_subsystem_add_host", 00:18:57.852 "params": { 00:18:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.852 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.852 "psk": "key0" 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_subsystem_add_ns", 00:18:57.852 "params": { 00:18:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.852 "namespace": { 00:18:57.852 "nsid": 1, 00:18:57.852 "bdev_name": "malloc0", 00:18:57.852 "nguid": "8661B44BD99441D197FFFBCEF50FD679", 00:18:57.852 "uuid": "8661b44b-d994-41d1-97ff-fbcef50fd679", 00:18:57.852 "no_auto_visible": 
false 00:18:57.852 } 00:18:57.852 } 00:18:57.852 }, 00:18:57.852 { 00:18:57.852 "method": "nvmf_subsystem_add_listener", 00:18:57.852 "params": { 00:18:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.852 "listen_address": { 00:18:57.852 "trtype": "TCP", 00:18:57.852 "adrfam": "IPv4", 00:18:57.852 "traddr": "10.0.0.2", 00:18:57.852 "trsvcid": "4420" 00:18:57.852 }, 00:18:57.852 "secure_channel": true 00:18:57.852 } 00:18:57.852 } 00:18:57.852 ] 00:18:57.852 } 00:18:57.852 ] 00:18:57.852 }' 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1741221 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1741221 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1741221 ']' 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.852 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.110 [2024-12-06 11:20:30.818917] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:58.110 [2024-12-06 11:20:30.818963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.110 [2024-12-06 11:20:30.883687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.110 [2024-12-06 11:20:30.921232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.110 [2024-12-06 11:20:30.921267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.110 [2024-12-06 11:20:30.921274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.110 [2024-12-06 11:20:30.921279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.110 [2024-12-06 11:20:30.921284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.110 [2024-12-06 11:20:30.921856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.369 [2024-12-06 11:20:31.133048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.369 [2024-12-06 11:20:31.165084] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.369 [2024-12-06 11:20:31.165270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1741264 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1741264 /var/tmp/bdevperf.sock 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1741264 ']' 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.937 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:58.937 "subsystems": [ 00:18:58.937 { 00:18:58.937 "subsystem": "keyring", 00:18:58.937 "config": [ 00:18:58.937 { 00:18:58.937 "method": "keyring_file_add_key", 00:18:58.937 "params": { 00:18:58.937 "name": "key0", 00:18:58.937 "path": "/tmp/tmp.hYEkwJb2VO" 00:18:58.937 } 00:18:58.937 } 00:18:58.937 ] 00:18:58.937 }, 00:18:58.937 { 00:18:58.937 "subsystem": "iobuf", 00:18:58.937 "config": [ 00:18:58.937 { 00:18:58.937 "method": "iobuf_set_options", 00:18:58.937 "params": { 00:18:58.937 "small_pool_count": 8192, 00:18:58.937 "large_pool_count": 1024, 00:18:58.937 "small_bufsize": 8192, 00:18:58.937 "large_bufsize": 135168, 00:18:58.937 "enable_numa": false 00:18:58.937 } 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "sock", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "sock_set_default_impl", 00:18:58.938 "params": { 00:18:58.938 "impl_name": "posix" 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "sock_impl_set_options", 00:18:58.938 "params": { 00:18:58.938 "impl_name": "ssl", 00:18:58.938 "recv_buf_size": 4096, 00:18:58.938 "send_buf_size": 4096, 00:18:58.938 "enable_recv_pipe": true, 00:18:58.938 "enable_quickack": false, 00:18:58.938 "enable_placement_id": 0, 00:18:58.938 "enable_zerocopy_send_server": true, 00:18:58.938 "enable_zerocopy_send_client": false, 00:18:58.938 "zerocopy_threshold": 0, 00:18:58.938 "tls_version": 0, 00:18:58.938 "enable_ktls": false 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "sock_impl_set_options", 00:18:58.938 "params": { 
00:18:58.938 "impl_name": "posix", 00:18:58.938 "recv_buf_size": 2097152, 00:18:58.938 "send_buf_size": 2097152, 00:18:58.938 "enable_recv_pipe": true, 00:18:58.938 "enable_quickack": false, 00:18:58.938 "enable_placement_id": 0, 00:18:58.938 "enable_zerocopy_send_server": true, 00:18:58.938 "enable_zerocopy_send_client": false, 00:18:58.938 "zerocopy_threshold": 0, 00:18:58.938 "tls_version": 0, 00:18:58.938 "enable_ktls": false 00:18:58.938 } 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "vmd", 00:18:58.938 "config": [] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "accel", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "accel_set_options", 00:18:58.938 "params": { 00:18:58.938 "small_cache_size": 128, 00:18:58.938 "large_cache_size": 16, 00:18:58.938 "task_count": 2048, 00:18:58.938 "sequence_count": 2048, 00:18:58.938 "buf_count": 2048 00:18:58.938 } 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "bdev", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "bdev_set_options", 00:18:58.938 "params": { 00:18:58.938 "bdev_io_pool_size": 65535, 00:18:58.938 "bdev_io_cache_size": 256, 00:18:58.938 "bdev_auto_examine": true, 00:18:58.938 "iobuf_small_cache_size": 128, 00:18:58.938 "iobuf_large_cache_size": 16 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "bdev_raid_set_options", 00:18:58.938 "params": { 00:18:58.938 "process_window_size_kb": 1024, 00:18:58.938 "process_max_bandwidth_mb_sec": 0 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "bdev_iscsi_set_options", 00:18:58.938 "params": { 00:18:58.938 "timeout_sec": 30 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "bdev_nvme_set_options", 00:18:58.938 "params": { 00:18:58.938 "action_on_timeout": "none", 00:18:58.938 "timeout_us": 0, 00:18:58.938 "timeout_admin_us": 0, 00:18:58.938 "keep_alive_timeout_ms": 10000, 00:18:58.938 
"arbitration_burst": 0, 00:18:58.938 "low_priority_weight": 0, 00:18:58.938 "medium_priority_weight": 0, 00:18:58.938 "high_priority_weight": 0, 00:18:58.938 "nvme_adminq_poll_period_us": 10000, 00:18:58.938 "nvme_ioq_poll_period_us": 0, 00:18:58.938 "io_queue_requests": 512, 00:18:58.938 "delay_cmd_submit": true, 00:18:58.938 "transport_retry_count": 4, 00:18:58.938 "bdev_retry_count": 3, 00:18:58.938 "transport_ack_timeout": 0, 00:18:58.938 "ctrlr_loss_timeout_sec": 0, 00:18:58.938 "reconnect_delay_sec": 0, 00:18:58.938 "fast_io_fail_timeout_sec": 0, 00:18:58.938 "disable_auto_failback": false, 00:18:58.938 "generate_uuids": false, 00:18:58.938 "transport_tos": 0, 00:18:58.938 "nvme_error_stat": false, 00:18:58.938 "rdma_srq_size": 0, 00:18:58.938 "io_path_stat": false, 00:18:58.938 "allow_accel_sequence": false, 00:18:58.938 "rdma_max_cq_size": 0, 00:18:58.938 "rdma_cm_event_timeout_ms": 0, 00:18:58.938 "dhchap_digests": [ 00:18:58.938 "sha256", 00:18:58.938 "sha384", 00:18:58.938 "sha512" 00:18:58.938 ], 00:18:58.938 "dhchap_dhgroups": [ 00:18:58.938 "null", 00:18:58.938 "ffdhe2048", 00:18:58.938 "ffdhe3072", 00:18:58.938 "ffdhe4096", 00:18:58.938 "ffdhe6144", 00:18:58.938 "ffdhe8192" 00:18:58.938 ] 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "bdev_nvme_attach_controller", 00:18:58.938 "params": { 00:18:58.938 "name": "TLSTEST", 00:18:58.938 "trtype": "TCP", 00:18:58.938 "adrfam": "IPv4", 00:18:58.938 "traddr": "10.0.0.2", 00:18:58.938 "trsvcid": "4420", 00:18:58.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.938 "prchk_reftag": false, 00:18:58.938 "prchk_guard": false, 00:18:58.938 "ctrlr_loss_timeout_sec": 0, 00:18:58.938 "reconnect_delay_sec": 0, 00:18:58.938 "fast_io_fail_timeout_sec": 0, 00:18:58.938 "psk": "key0", 00:18:58.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.938 "hdgst": false, 00:18:58.938 "ddgst": false, 00:18:58.938 "multipath": "multipath" 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 
"method": "bdev_nvme_set_hotplug", 00:18:58.938 "params": { 00:18:58.938 "period_us": 100000, 00:18:58.938 "enable": false 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "bdev_wait_for_examine" 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "nbd", 00:18:58.938 "config": [] 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }' 00:18:58.938 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.938 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.938 [2024-12-06 11:20:31.705599] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:18:58.938 [2024-12-06 11:20:31.705644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741264 ] 00:18:58.938 [2024-12-06 11:20:31.779235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.938 [2024-12-06 11:20:31.818442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.197 [2024-12-06 11:20:31.971076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.764 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.764 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.764 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.764 Running I/O for 10 seconds... 
00:19:02.072 4874.00 IOPS, 19.04 MiB/s [2024-12-06T10:20:35.940Z] 4866.50 IOPS, 19.01 MiB/s [2024-12-06T10:20:36.873Z] 4925.00 IOPS, 19.24 MiB/s [2024-12-06T10:20:37.806Z] 4995.00 IOPS, 19.51 MiB/s [2024-12-06T10:20:38.744Z] 4990.80 IOPS, 19.50 MiB/s [2024-12-06T10:20:39.681Z] 4949.17 IOPS, 19.33 MiB/s [2024-12-06T10:20:41.057Z] 4964.00 IOPS, 19.39 MiB/s [2024-12-06T10:20:41.992Z] 4916.25 IOPS, 19.20 MiB/s [2024-12-06T10:20:42.929Z] 4927.56 IOPS, 19.25 MiB/s [2024-12-06T10:20:42.929Z] 4940.70 IOPS, 19.30 MiB/s 00:19:09.991 Latency(us) 00:19:09.991 [2024-12-06T10:20:42.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.992 Verification LBA range: start 0x0 length 0x2000 00:19:09.992 TLSTESTn1 : 10.02 4945.51 19.32 0.00 0.00 25847.59 5898.24 31933.91 00:19:09.992 [2024-12-06T10:20:42.930Z] =================================================================================================================== 00:19:09.992 [2024-12-06T10:20:42.930Z] Total : 4945.51 19.32 0.00 0.00 25847.59 5898.24 31933.91 00:19:09.992 { 00:19:09.992 "results": [ 00:19:09.992 { 00:19:09.992 "job": "TLSTESTn1", 00:19:09.992 "core_mask": "0x4", 00:19:09.992 "workload": "verify", 00:19:09.992 "status": "finished", 00:19:09.992 "verify_range": { 00:19:09.992 "start": 0, 00:19:09.992 "length": 8192 00:19:09.992 }, 00:19:09.992 "queue_depth": 128, 00:19:09.992 "io_size": 4096, 00:19:09.992 "runtime": 10.016146, 00:19:09.992 "iops": 4945.514971526973, 00:19:09.992 "mibps": 19.318417857527237, 00:19:09.992 "io_failed": 0, 00:19:09.992 "io_timeout": 0, 00:19:09.992 "avg_latency_us": 25847.59276427136, 00:19:09.992 "min_latency_us": 5898.24, 00:19:09.992 "max_latency_us": 31933.905454545453 00:19:09.992 } 00:19:09.992 ], 00:19:09.992 "core_count": 1 00:19:09.992 } 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1741264 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1741264 ']' 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1741264 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741264 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741264' 00:19:09.992 killing process with pid 1741264 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1741264 00:19:09.992 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.992 00:19:09.992 Latency(us) 00:19:09.992 [2024-12-06T10:20:42.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.992 [2024-12-06T10:20:42.930Z] =================================================================================================================== 00:19:09.992 [2024-12-06T10:20:42.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1741264 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1741221 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 1741221 ']' 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1741221 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.992 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741221 00:19:10.251 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.251 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.251 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741221' 00:19:10.251 killing process with pid 1741221 00:19:10.251 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1741221 00:19:10.251 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1741221 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1743348 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1743348 00:19:10.251 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:10.251 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1743348 ']' 00:19:10.252 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.252 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.252 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.252 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.252 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.252 [2024-12-06 11:20:43.165449] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:19:10.252 [2024-12-06 11:20:43.165495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.511 [2024-12-06 11:20:43.237587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.511 [2024-12-06 11:20:43.275894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.511 [2024-12-06 11:20:43.275929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.511 [2024-12-06 11:20:43.275936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.511 [2024-12-06 11:20:43.275941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:10.511 [2024-12-06 11:20:43.275946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.511 [2024-12-06 11:20:43.276520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.079 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.079 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.079 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.079 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.079 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.079 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.079 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.hYEkwJb2VO 00:19:11.079 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hYEkwJb2VO 00:19:11.079 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.338 [2024-12-06 11:20:44.169840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.338 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.596 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.596 [2024-12-06 11:20:44.526743] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:11.596 [2024-12-06 11:20:44.526926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.855 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.855 malloc0 00:19:11.855 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.114 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1743766 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1743766 /var/tmp/bdevperf.sock 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1743766 ']' 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.372 
11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.372 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.631 [2024-12-06 11:20:45.332168] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:19:12.631 [2024-12-06 11:20:45.332214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743766 ] 00:19:12.631 [2024-12-06 11:20:45.406383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.631 [2024-12-06 11:20:45.445758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.631 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.631 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.631 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:19:12.889 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:13.148 [2024-12-06 11:20:45.877580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:13.148 nvme0n1 00:19:13.148 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.148 Running I/O for 1 seconds... 00:19:14.526 5706.00 IOPS, 22.29 MiB/s 00:19:14.526 Latency(us) 00:19:14.526 [2024-12-06T10:20:47.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.526 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:14.526 Verification LBA range: start 0x0 length 0x2000 00:19:14.526 nvme0n1 : 1.02 5723.69 22.36 0.00 0.00 22182.52 7060.01 23354.65 00:19:14.526 [2024-12-06T10:20:47.464Z] =================================================================================================================== 00:19:14.526 [2024-12-06T10:20:47.464Z] Total : 5723.69 22.36 0.00 0.00 22182.52 7060.01 23354.65 00:19:14.526 { 00:19:14.526 "results": [ 00:19:14.526 { 00:19:14.526 "job": "nvme0n1", 00:19:14.526 "core_mask": "0x2", 00:19:14.526 "workload": "verify", 00:19:14.526 "status": "finished", 00:19:14.526 "verify_range": { 00:19:14.526 "start": 0, 00:19:14.526 "length": 8192 00:19:14.526 }, 00:19:14.526 "queue_depth": 128, 00:19:14.526 "io_size": 4096, 00:19:14.526 "runtime": 1.019273, 00:19:14.526 "iops": 5723.687373255251, 00:19:14.526 "mibps": 22.358153801778325, 00:19:14.526 "io_failed": 0, 00:19:14.526 "io_timeout": 0, 00:19:14.526 "avg_latency_us": 22182.517520491165, 00:19:14.526 "min_latency_us": 7060.014545454545, 00:19:14.526 "max_latency_us": 23354.647272727274 00:19:14.526 } 00:19:14.526 ], 00:19:14.526 "core_count": 1 00:19:14.526 } 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1743766 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1743766 ']' 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1743766 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743766 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743766' 00:19:14.526 killing process with pid 1743766 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1743766 00:19:14.526 Received shutdown signal, test time was about 1.000000 seconds 00:19:14.526 00:19:14.526 Latency(us) 00:19:14.526 [2024-12-06T10:20:47.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.526 [2024-12-06T10:20:47.464Z] =================================================================================================================== 00:19:14.526 [2024-12-06T10:20:47.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1743766 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1743348 00:19:14.526 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1743348 ']' 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1743348 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743348 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743348' 00:19:14.527 killing process with pid 1743348 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1743348 00:19:14.527 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1743348 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1744179 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1744179 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1744179 ']' 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.787 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.787 [2024-12-06 11:20:47.573368] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:19:14.787 [2024-12-06 11:20:47.573411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.787 [2024-12-06 11:20:47.650003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.787 [2024-12-06 11:20:47.687607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.787 [2024-12-06 11:20:47.687642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.787 [2024-12-06 11:20:47.687648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.787 [2024-12-06 11:20:47.687654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.787 [2024-12-06 11:20:47.687658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.787 [2024-12-06 11:20:47.688206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.726 [2024-12-06 11:20:48.429204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.726 malloc0 00:19:15.726 [2024-12-06 11:20:48.457268] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.726 [2024-12-06 11:20:48.457474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1744451 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1744451 /var/tmp/bdevperf.sock 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1744451 ']' 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.726 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.726 [2024-12-06 11:20:48.532858] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:19:15.726 [2024-12-06 11:20:48.532898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744451 ] 00:19:15.726 [2024-12-06 11:20:48.605737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.726 [2024-12-06 11:20:48.643213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.985 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.985 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.985 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hYEkwJb2VO 00:19:16.243 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:16.243 [2024-12-06 11:20:49.074763] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.243 nvme0n1 00:19:16.243 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.502 Running I/O for 1 seconds... 
00:19:17.503 5375.00 IOPS, 21.00 MiB/s 00:19:17.503 Latency(us) 00:19:17.503 [2024-12-06T10:20:50.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.503 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:17.503 Verification LBA range: start 0x0 length 0x2000 00:19:17.503 nvme0n1 : 1.01 5423.53 21.19 0.00 0.00 23445.37 5898.24 28478.37 00:19:17.503 [2024-12-06T10:20:50.441Z] =================================================================================================================== 00:19:17.503 [2024-12-06T10:20:50.441Z] Total : 5423.53 21.19 0.00 0.00 23445.37 5898.24 28478.37 00:19:17.503 { 00:19:17.503 "results": [ 00:19:17.503 { 00:19:17.503 "job": "nvme0n1", 00:19:17.503 "core_mask": "0x2", 00:19:17.503 "workload": "verify", 00:19:17.503 "status": "finished", 00:19:17.503 "verify_range": { 00:19:17.503 "start": 0, 00:19:17.503 "length": 8192 00:19:17.503 }, 00:19:17.503 "queue_depth": 128, 00:19:17.503 "io_size": 4096, 00:19:17.503 "runtime": 1.014653, 00:19:17.503 "iops": 5423.529029136069, 00:19:17.503 "mibps": 21.18566027006277, 00:19:17.503 "io_failed": 0, 00:19:17.503 "io_timeout": 0, 00:19:17.503 "avg_latency_us": 23445.36785687146, 00:19:17.503 "min_latency_us": 5898.24, 00:19:17.503 "max_latency_us": 28478.37090909091 00:19:17.503 } 00:19:17.503 ], 00:19:17.503 "core_count": 1 00:19:17.503 } 00:19:17.503 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:17.503 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.504 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:17.504 "subsystems": [ 00:19:17.504 { 00:19:17.504 "subsystem": "keyring", 
00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "keyring_file_add_key", 00:19:17.504 "params": { 00:19:17.504 "name": "key0", 00:19:17.504 "path": "/tmp/tmp.hYEkwJb2VO" 00:19:17.504 } 00:19:17.504 } 00:19:17.504 ] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "iobuf", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "iobuf_set_options", 00:19:17.504 "params": { 00:19:17.504 "small_pool_count": 8192, 00:19:17.504 "large_pool_count": 1024, 00:19:17.504 "small_bufsize": 8192, 00:19:17.504 "large_bufsize": 135168, 00:19:17.504 "enable_numa": false 00:19:17.504 } 00:19:17.504 } 00:19:17.504 ] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "sock", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "sock_set_default_impl", 00:19:17.504 "params": { 00:19:17.504 "impl_name": "posix" 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "sock_impl_set_options", 00:19:17.504 "params": { 00:19:17.504 "impl_name": "ssl", 00:19:17.504 "recv_buf_size": 4096, 00:19:17.504 "send_buf_size": 4096, 00:19:17.504 "enable_recv_pipe": true, 00:19:17.504 "enable_quickack": false, 00:19:17.504 "enable_placement_id": 0, 00:19:17.504 "enable_zerocopy_send_server": true, 00:19:17.504 "enable_zerocopy_send_client": false, 00:19:17.504 "zerocopy_threshold": 0, 00:19:17.504 "tls_version": 0, 00:19:17.504 "enable_ktls": false 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "sock_impl_set_options", 00:19:17.504 "params": { 00:19:17.504 "impl_name": "posix", 00:19:17.504 "recv_buf_size": 2097152, 00:19:17.504 "send_buf_size": 2097152, 00:19:17.504 "enable_recv_pipe": true, 00:19:17.504 "enable_quickack": false, 00:19:17.504 "enable_placement_id": 0, 00:19:17.504 "enable_zerocopy_send_server": true, 00:19:17.504 "enable_zerocopy_send_client": false, 00:19:17.504 "zerocopy_threshold": 0, 00:19:17.504 "tls_version": 0, 00:19:17.504 "enable_ktls": false 00:19:17.504 } 00:19:17.504 } 00:19:17.504 ] 
00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "vmd", 00:19:17.504 "config": [] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "accel", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "accel_set_options", 00:19:17.504 "params": { 00:19:17.504 "small_cache_size": 128, 00:19:17.504 "large_cache_size": 16, 00:19:17.504 "task_count": 2048, 00:19:17.504 "sequence_count": 2048, 00:19:17.504 "buf_count": 2048 00:19:17.504 } 00:19:17.504 } 00:19:17.504 ] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "bdev", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "bdev_set_options", 00:19:17.504 "params": { 00:19:17.504 "bdev_io_pool_size": 65535, 00:19:17.504 "bdev_io_cache_size": 256, 00:19:17.504 "bdev_auto_examine": true, 00:19:17.504 "iobuf_small_cache_size": 128, 00:19:17.504 "iobuf_large_cache_size": 16 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_raid_set_options", 00:19:17.504 "params": { 00:19:17.504 "process_window_size_kb": 1024, 00:19:17.504 "process_max_bandwidth_mb_sec": 0 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_iscsi_set_options", 00:19:17.504 "params": { 00:19:17.504 "timeout_sec": 30 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_nvme_set_options", 00:19:17.504 "params": { 00:19:17.504 "action_on_timeout": "none", 00:19:17.504 "timeout_us": 0, 00:19:17.504 "timeout_admin_us": 0, 00:19:17.504 "keep_alive_timeout_ms": 10000, 00:19:17.504 "arbitration_burst": 0, 00:19:17.504 "low_priority_weight": 0, 00:19:17.504 "medium_priority_weight": 0, 00:19:17.504 "high_priority_weight": 0, 00:19:17.504 "nvme_adminq_poll_period_us": 10000, 00:19:17.504 "nvme_ioq_poll_period_us": 0, 00:19:17.504 "io_queue_requests": 0, 00:19:17.504 "delay_cmd_submit": true, 00:19:17.504 "transport_retry_count": 4, 00:19:17.504 "bdev_retry_count": 3, 00:19:17.504 "transport_ack_timeout": 0, 00:19:17.504 "ctrlr_loss_timeout_sec": 0, 00:19:17.504 
"reconnect_delay_sec": 0, 00:19:17.504 "fast_io_fail_timeout_sec": 0, 00:19:17.504 "disable_auto_failback": false, 00:19:17.504 "generate_uuids": false, 00:19:17.504 "transport_tos": 0, 00:19:17.504 "nvme_error_stat": false, 00:19:17.504 "rdma_srq_size": 0, 00:19:17.504 "io_path_stat": false, 00:19:17.504 "allow_accel_sequence": false, 00:19:17.504 "rdma_max_cq_size": 0, 00:19:17.504 "rdma_cm_event_timeout_ms": 0, 00:19:17.504 "dhchap_digests": [ 00:19:17.504 "sha256", 00:19:17.504 "sha384", 00:19:17.504 "sha512" 00:19:17.504 ], 00:19:17.504 "dhchap_dhgroups": [ 00:19:17.504 "null", 00:19:17.504 "ffdhe2048", 00:19:17.504 "ffdhe3072", 00:19:17.504 "ffdhe4096", 00:19:17.504 "ffdhe6144", 00:19:17.504 "ffdhe8192" 00:19:17.504 ] 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_nvme_set_hotplug", 00:19:17.504 "params": { 00:19:17.504 "period_us": 100000, 00:19:17.504 "enable": false 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_malloc_create", 00:19:17.504 "params": { 00:19:17.504 "name": "malloc0", 00:19:17.504 "num_blocks": 8192, 00:19:17.504 "block_size": 4096, 00:19:17.504 "physical_block_size": 4096, 00:19:17.504 "uuid": "c5c423f5-ba22-4303-b42b-2cb9b5bfeb69", 00:19:17.504 "optimal_io_boundary": 0, 00:19:17.504 "md_size": 0, 00:19:17.504 "dif_type": 0, 00:19:17.504 "dif_is_head_of_md": false, 00:19:17.504 "dif_pi_format": 0 00:19:17.504 } 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "method": "bdev_wait_for_examine" 00:19:17.504 } 00:19:17.504 ] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "nbd", 00:19:17.504 "config": [] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "scheduler", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 "method": "framework_set_scheduler", 00:19:17.504 "params": { 00:19:17.504 "name": "static" 00:19:17.504 } 00:19:17.504 } 00:19:17.504 ] 00:19:17.504 }, 00:19:17.504 { 00:19:17.504 "subsystem": "nvmf", 00:19:17.504 "config": [ 00:19:17.504 { 00:19:17.504 
"method": "nvmf_set_config", 00:19:17.505 "params": { 00:19:17.505 "discovery_filter": "match_any", 00:19:17.505 "admin_cmd_passthru": { 00:19:17.505 "identify_ctrlr": false 00:19:17.505 }, 00:19:17.505 "dhchap_digests": [ 00:19:17.505 "sha256", 00:19:17.505 "sha384", 00:19:17.505 "sha512" 00:19:17.505 ], 00:19:17.505 "dhchap_dhgroups": [ 00:19:17.505 "null", 00:19:17.505 "ffdhe2048", 00:19:17.505 "ffdhe3072", 00:19:17.505 "ffdhe4096", 00:19:17.505 "ffdhe6144", 00:19:17.505 "ffdhe8192" 00:19:17.505 ] 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_set_max_subsystems", 00:19:17.505 "params": { 00:19:17.505 "max_subsystems": 1024 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_set_crdt", 00:19:17.505 "params": { 00:19:17.505 "crdt1": 0, 00:19:17.505 "crdt2": 0, 00:19:17.505 "crdt3": 0 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_create_transport", 00:19:17.505 "params": { 00:19:17.505 "trtype": "TCP", 00:19:17.505 "max_queue_depth": 128, 00:19:17.505 "max_io_qpairs_per_ctrlr": 127, 00:19:17.505 "in_capsule_data_size": 4096, 00:19:17.505 "max_io_size": 131072, 00:19:17.505 "io_unit_size": 131072, 00:19:17.505 "max_aq_depth": 128, 00:19:17.505 "num_shared_buffers": 511, 00:19:17.505 "buf_cache_size": 4294967295, 00:19:17.505 "dif_insert_or_strip": false, 00:19:17.505 "zcopy": false, 00:19:17.505 "c2h_success": false, 00:19:17.505 "sock_priority": 0, 00:19:17.505 "abort_timeout_sec": 1, 00:19:17.505 "ack_timeout": 0, 00:19:17.505 "data_wr_pool_size": 0 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_create_subsystem", 00:19:17.505 "params": { 00:19:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.505 "allow_any_host": false, 00:19:17.505 "serial_number": "00000000000000000000", 00:19:17.505 "model_number": "SPDK bdev Controller", 00:19:17.505 "max_namespaces": 32, 00:19:17.505 "min_cntlid": 1, 00:19:17.505 "max_cntlid": 65519, 00:19:17.505 "ana_reporting": 
false 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_subsystem_add_host", 00:19:17.505 "params": { 00:19:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.505 "host": "nqn.2016-06.io.spdk:host1", 00:19:17.505 "psk": "key0" 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_subsystem_add_ns", 00:19:17.505 "params": { 00:19:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.505 "namespace": { 00:19:17.505 "nsid": 1, 00:19:17.505 "bdev_name": "malloc0", 00:19:17.505 "nguid": "C5C423F5BA224303B42B2CB9B5BFEB69", 00:19:17.505 "uuid": "c5c423f5-ba22-4303-b42b-2cb9b5bfeb69", 00:19:17.505 "no_auto_visible": false 00:19:17.505 } 00:19:17.505 } 00:19:17.505 }, 00:19:17.505 { 00:19:17.505 "method": "nvmf_subsystem_add_listener", 00:19:17.505 "params": { 00:19:17.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.505 "listen_address": { 00:19:17.505 "trtype": "TCP", 00:19:17.505 "adrfam": "IPv4", 00:19:17.505 "traddr": "10.0.0.2", 00:19:17.505 "trsvcid": "4420" 00:19:17.505 }, 00:19:17.505 "secure_channel": false, 00:19:17.505 "sock_impl": "ssl" 00:19:17.505 } 00:19:17.505 } 00:19:17.505 ] 00:19:17.505 } 00:19:17.505 ] 00:19:17.505 }' 00:19:17.505 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:17.839 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:17.839 "subsystems": [ 00:19:17.839 { 00:19:17.839 "subsystem": "keyring", 00:19:17.839 "config": [ 00:19:17.839 { 00:19:17.839 "method": "keyring_file_add_key", 00:19:17.839 "params": { 00:19:17.839 "name": "key0", 00:19:17.839 "path": "/tmp/tmp.hYEkwJb2VO" 00:19:17.839 } 00:19:17.839 } 00:19:17.839 ] 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "subsystem": "iobuf", 00:19:17.839 "config": [ 00:19:17.839 { 00:19:17.839 "method": "iobuf_set_options", 00:19:17.839 "params": { 00:19:17.839 "small_pool_count": 
8192, 00:19:17.839 "large_pool_count": 1024, 00:19:17.839 "small_bufsize": 8192, 00:19:17.839 "large_bufsize": 135168, 00:19:17.839 "enable_numa": false 00:19:17.839 } 00:19:17.839 } 00:19:17.839 ] 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "subsystem": "sock", 00:19:17.839 "config": [ 00:19:17.839 { 00:19:17.839 "method": "sock_set_default_impl", 00:19:17.839 "params": { 00:19:17.839 "impl_name": "posix" 00:19:17.839 } 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "method": "sock_impl_set_options", 00:19:17.839 "params": { 00:19:17.839 "impl_name": "ssl", 00:19:17.839 "recv_buf_size": 4096, 00:19:17.839 "send_buf_size": 4096, 00:19:17.839 "enable_recv_pipe": true, 00:19:17.839 "enable_quickack": false, 00:19:17.839 "enable_placement_id": 0, 00:19:17.839 "enable_zerocopy_send_server": true, 00:19:17.839 "enable_zerocopy_send_client": false, 00:19:17.839 "zerocopy_threshold": 0, 00:19:17.839 "tls_version": 0, 00:19:17.839 "enable_ktls": false 00:19:17.839 } 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "method": "sock_impl_set_options", 00:19:17.839 "params": { 00:19:17.839 "impl_name": "posix", 00:19:17.839 "recv_buf_size": 2097152, 00:19:17.839 "send_buf_size": 2097152, 00:19:17.839 "enable_recv_pipe": true, 00:19:17.839 "enable_quickack": false, 00:19:17.839 "enable_placement_id": 0, 00:19:17.839 "enable_zerocopy_send_server": true, 00:19:17.839 "enable_zerocopy_send_client": false, 00:19:17.839 "zerocopy_threshold": 0, 00:19:17.839 "tls_version": 0, 00:19:17.839 "enable_ktls": false 00:19:17.839 } 00:19:17.839 } 00:19:17.839 ] 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "subsystem": "vmd", 00:19:17.839 "config": [] 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "subsystem": "accel", 00:19:17.839 "config": [ 00:19:17.839 { 00:19:17.839 "method": "accel_set_options", 00:19:17.839 "params": { 00:19:17.839 "small_cache_size": 128, 00:19:17.839 "large_cache_size": 16, 00:19:17.839 "task_count": 2048, 00:19:17.839 "sequence_count": 2048, 00:19:17.839 "buf_count": 2048 
00:19:17.839 } 00:19:17.839 } 00:19:17.839 ] 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "subsystem": "bdev", 00:19:17.839 "config": [ 00:19:17.839 { 00:19:17.839 "method": "bdev_set_options", 00:19:17.839 "params": { 00:19:17.839 "bdev_io_pool_size": 65535, 00:19:17.839 "bdev_io_cache_size": 256, 00:19:17.839 "bdev_auto_examine": true, 00:19:17.839 "iobuf_small_cache_size": 128, 00:19:17.839 "iobuf_large_cache_size": 16 00:19:17.839 } 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "method": "bdev_raid_set_options", 00:19:17.839 "params": { 00:19:17.839 "process_window_size_kb": 1024, 00:19:17.839 "process_max_bandwidth_mb_sec": 0 00:19:17.839 } 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "method": "bdev_iscsi_set_options", 00:19:17.839 "params": { 00:19:17.839 "timeout_sec": 30 00:19:17.839 } 00:19:17.839 }, 00:19:17.839 { 00:19:17.839 "method": "bdev_nvme_set_options", 00:19:17.839 "params": { 00:19:17.839 "action_on_timeout": "none", 00:19:17.839 "timeout_us": 0, 00:19:17.839 "timeout_admin_us": 0, 00:19:17.839 "keep_alive_timeout_ms": 10000, 00:19:17.839 "arbitration_burst": 0, 00:19:17.839 "low_priority_weight": 0, 00:19:17.839 "medium_priority_weight": 0, 00:19:17.839 "high_priority_weight": 0, 00:19:17.839 "nvme_adminq_poll_period_us": 10000, 00:19:17.839 "nvme_ioq_poll_period_us": 0, 00:19:17.840 "io_queue_requests": 512, 00:19:17.840 "delay_cmd_submit": true, 00:19:17.840 "transport_retry_count": 4, 00:19:17.840 "bdev_retry_count": 3, 00:19:17.840 "transport_ack_timeout": 0, 00:19:17.840 "ctrlr_loss_timeout_sec": 0, 00:19:17.840 "reconnect_delay_sec": 0, 00:19:17.840 "fast_io_fail_timeout_sec": 0, 00:19:17.840 "disable_auto_failback": false, 00:19:17.840 "generate_uuids": false, 00:19:17.840 "transport_tos": 0, 00:19:17.840 "nvme_error_stat": false, 00:19:17.840 "rdma_srq_size": 0, 00:19:17.840 "io_path_stat": false, 00:19:17.840 "allow_accel_sequence": false, 00:19:17.840 "rdma_max_cq_size": 0, 00:19:17.840 "rdma_cm_event_timeout_ms": 0, 00:19:17.840 
"dhchap_digests": [ 00:19:17.840 "sha256", 00:19:17.840 "sha384", 00:19:17.840 "sha512" 00:19:17.840 ], 00:19:17.840 "dhchap_dhgroups": [ 00:19:17.840 "null", 00:19:17.840 "ffdhe2048", 00:19:17.840 "ffdhe3072", 00:19:17.840 "ffdhe4096", 00:19:17.840 "ffdhe6144", 00:19:17.840 "ffdhe8192" 00:19:17.840 ] 00:19:17.840 } 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "method": "bdev_nvme_attach_controller", 00:19:17.840 "params": { 00:19:17.840 "name": "nvme0", 00:19:17.840 "trtype": "TCP", 00:19:17.840 "adrfam": "IPv4", 00:19:17.840 "traddr": "10.0.0.2", 00:19:17.840 "trsvcid": "4420", 00:19:17.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.840 "prchk_reftag": false, 00:19:17.840 "prchk_guard": false, 00:19:17.840 "ctrlr_loss_timeout_sec": 0, 00:19:17.840 "reconnect_delay_sec": 0, 00:19:17.840 "fast_io_fail_timeout_sec": 0, 00:19:17.840 "psk": "key0", 00:19:17.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.840 "hdgst": false, 00:19:17.840 "ddgst": false, 00:19:17.840 "multipath": "multipath" 00:19:17.840 } 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "method": "bdev_nvme_set_hotplug", 00:19:17.840 "params": { 00:19:17.840 "period_us": 100000, 00:19:17.840 "enable": false 00:19:17.840 } 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "method": "bdev_enable_histogram", 00:19:17.840 "params": { 00:19:17.840 "name": "nvme0n1", 00:19:17.840 "enable": true 00:19:17.840 } 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "method": "bdev_wait_for_examine" 00:19:17.840 } 00:19:17.840 ] 00:19:17.840 }, 00:19:17.840 { 00:19:17.840 "subsystem": "nbd", 00:19:17.840 "config": [] 00:19:17.840 } 00:19:17.840 ] 00:19:17.840 }' 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1744451 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1744451 ']' 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1744451 00:19:17.840 11:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744451 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744451' 00:19:17.840 killing process with pid 1744451 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1744451 00:19:17.840 Received shutdown signal, test time was about 1.000000 seconds 00:19:17.840 00:19:17.840 Latency(us) 00:19:17.840 [2024-12-06T10:20:50.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.840 [2024-12-06T10:20:50.778Z] =================================================================================================================== 00:19:17.840 [2024-12-06T10:20:50.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.840 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1744451 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1744179 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1744179 ']' 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1744179 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.100 
11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744179 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744179' 00:19:18.100 killing process with pid 1744179 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1744179 00:19:18.100 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1744179 00:19:18.360 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:18.360 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.360 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.360 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:18.360 "subsystems": [ 00:19:18.360 { 00:19:18.360 "subsystem": "keyring", 00:19:18.360 "config": [ 00:19:18.360 { 00:19:18.360 "method": "keyring_file_add_key", 00:19:18.360 "params": { 00:19:18.360 "name": "key0", 00:19:18.360 "path": "/tmp/tmp.hYEkwJb2VO" 00:19:18.360 } 00:19:18.360 } 00:19:18.360 ] 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "subsystem": "iobuf", 00:19:18.360 "config": [ 00:19:18.360 { 00:19:18.360 "method": "iobuf_set_options", 00:19:18.360 "params": { 00:19:18.360 "small_pool_count": 8192, 00:19:18.360 "large_pool_count": 1024, 00:19:18.360 "small_bufsize": 8192, 00:19:18.360 "large_bufsize": 135168, 00:19:18.360 "enable_numa": false 00:19:18.360 } 00:19:18.360 } 00:19:18.360 ] 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "subsystem": "sock", 00:19:18.360 "config": [ 
00:19:18.360 { 00:19:18.360 "method": "sock_set_default_impl", 00:19:18.360 "params": { 00:19:18.360 "impl_name": "posix" 00:19:18.360 } 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "method": "sock_impl_set_options", 00:19:18.360 "params": { 00:19:18.360 "impl_name": "ssl", 00:19:18.360 "recv_buf_size": 4096, 00:19:18.360 "send_buf_size": 4096, 00:19:18.360 "enable_recv_pipe": true, 00:19:18.360 "enable_quickack": false, 00:19:18.360 "enable_placement_id": 0, 00:19:18.360 "enable_zerocopy_send_server": true, 00:19:18.360 "enable_zerocopy_send_client": false, 00:19:18.360 "zerocopy_threshold": 0, 00:19:18.360 "tls_version": 0, 00:19:18.360 "enable_ktls": false 00:19:18.360 } 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "method": "sock_impl_set_options", 00:19:18.360 "params": { 00:19:18.360 "impl_name": "posix", 00:19:18.360 "recv_buf_size": 2097152, 00:19:18.360 "send_buf_size": 2097152, 00:19:18.360 "enable_recv_pipe": true, 00:19:18.360 "enable_quickack": false, 00:19:18.360 "enable_placement_id": 0, 00:19:18.360 "enable_zerocopy_send_server": true, 00:19:18.360 "enable_zerocopy_send_client": false, 00:19:18.360 "zerocopy_threshold": 0, 00:19:18.360 "tls_version": 0, 00:19:18.360 "enable_ktls": false 00:19:18.360 } 00:19:18.360 } 00:19:18.360 ] 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "subsystem": "vmd", 00:19:18.360 "config": [] 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "subsystem": "accel", 00:19:18.360 "config": [ 00:19:18.360 { 00:19:18.360 "method": "accel_set_options", 00:19:18.360 "params": { 00:19:18.360 "small_cache_size": 128, 00:19:18.360 "large_cache_size": 16, 00:19:18.360 "task_count": 2048, 00:19:18.360 "sequence_count": 2048, 00:19:18.360 "buf_count": 2048 00:19:18.360 } 00:19:18.360 } 00:19:18.360 ] 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "subsystem": "bdev", 00:19:18.360 "config": [ 00:19:18.360 { 00:19:18.360 "method": "bdev_set_options", 00:19:18.360 "params": { 00:19:18.360 "bdev_io_pool_size": 65535, 00:19:18.360 "bdev_io_cache_size": 
256, 00:19:18.360 "bdev_auto_examine": true, 00:19:18.360 "iobuf_small_cache_size": 128, 00:19:18.360 "iobuf_large_cache_size": 16 00:19:18.360 } 00:19:18.360 }, 00:19:18.360 { 00:19:18.360 "method": "bdev_raid_set_options", 00:19:18.360 "params": { 00:19:18.360 "process_window_size_kb": 1024, 00:19:18.360 "process_max_bandwidth_mb_sec": 0 00:19:18.360 } 00:19:18.360 }, 00:19:18.360 { 00:19:18.361 "method": "bdev_iscsi_set_options", 00:19:18.361 "params": { 00:19:18.361 "timeout_sec": 30 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "bdev_nvme_set_options", 00:19:18.361 "params": { 00:19:18.361 "action_on_timeout": "none", 00:19:18.361 "timeout_us": 0, 00:19:18.361 "timeout_admin_us": 0, 00:19:18.361 "keep_alive_timeout_ms": 10000, 00:19:18.361 "arbitration_burst": 0, 00:19:18.361 "low_priority_weight": 0, 00:19:18.361 "medium_priority_weight": 0, 00:19:18.361 "high_priority_weight": 0, 00:19:18.361 "nvme_adminq_poll_period_us": 10000, 00:19:18.361 "nvme_ioq_poll_period_us": 0, 00:19:18.361 "io_queue_requests": 0, 00:19:18.361 "delay_cmd_submit": true, 00:19:18.361 "transport_retry_count": 4, 00:19:18.361 "bdev_retry_count": 3, 00:19:18.361 "transport_ack_timeout": 0, 00:19:18.361 "ctrlr_loss_timeout_sec": 0, 00:19:18.361 "reconnect_delay_sec": 0, 00:19:18.361 "fast_io_fail_timeout_sec": 0, 00:19:18.361 "disable_auto_failback": false, 00:19:18.361 "generate_uuids": false, 00:19:18.361 "transport_tos": 0, 00:19:18.361 "nvme_error_stat": false, 00:19:18.361 "rdma_srq_size": 0, 00:19:18.361 "io_path_stat": false, 00:19:18.361 "allow_accel_sequence": false, 00:19:18.361 "rdma_max_cq_size": 0, 00:19:18.361 "rdma_cm_event_timeout_ms": 0, 00:19:18.361 "dhchap_digests": [ 00:19:18.361 "sha256", 00:19:18.361 "sha384", 00:19:18.361 "sha512" 00:19:18.361 ], 00:19:18.361 "dhchap_dhgroups": [ 00:19:18.361 "null", 00:19:18.361 "ffdhe2048", 00:19:18.361 "ffdhe3072", 00:19:18.361 "ffdhe4096", 00:19:18.361 "ffdhe6144", 00:19:18.361 "ffdhe8192" 00:19:18.361 ] 
00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "bdev_nvme_set_hotplug", 00:19:18.361 "params": { 00:19:18.361 "period_us": 100000, 00:19:18.361 "enable": false 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "bdev_malloc_create", 00:19:18.361 "params": { 00:19:18.361 "name": "malloc0", 00:19:18.361 "num_blocks": 8192, 00:19:18.361 "block_size": 4096, 00:19:18.361 "physical_block_size": 4096, 00:19:18.361 "uuid": "c5c423f5-ba22-4303-b42b-2cb9b5bfeb69", 00:19:18.361 "optimal_io_boundary": 0, 00:19:18.361 "md_size": 0, 00:19:18.361 "dif_type": 0, 00:19:18.361 "dif_is_head_of_md": false, 00:19:18.361 "dif_pi_format": 0 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "bdev_wait_for_examine" 00:19:18.361 } 00:19:18.361 ] 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "subsystem": "nbd", 00:19:18.361 "config": [] 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "subsystem": "scheduler", 00:19:18.361 "config": [ 00:19:18.361 { 00:19:18.361 "method": "framework_set_scheduler", 00:19:18.361 "params": { 00:19:18.361 "name": "static" 00:19:18.361 } 00:19:18.361 } 00:19:18.361 ] 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "subsystem": "nvmf", 00:19:18.361 "config": [ 00:19:18.361 { 00:19:18.361 "method": "nvmf_set_config", 00:19:18.361 "params": { 00:19:18.361 "discovery_filter": "match_any", 00:19:18.361 "admin_cmd_passthru": { 00:19:18.361 "identify_ctrlr": false 00:19:18.361 }, 00:19:18.361 "dhchap_digests": [ 00:19:18.361 "sha256", 00:19:18.361 "sha384", 00:19:18.361 "sha512" 00:19:18.361 ], 00:19:18.361 "dhchap_dhgroups": [ 00:19:18.361 "null", 00:19:18.361 "ffdhe2048", 00:19:18.361 "ffdhe3072", 00:19:18.361 "ffdhe4096", 00:19:18.361 "ffdhe6144", 00:19:18.361 "ffdhe8192" 00:19:18.361 ] 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "nvmf_set_max_subsystems", 00:19:18.361 "params": { 00:19:18.361 "max_subsystems": 1024 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": 
"nvmf_set_crdt", 00:19:18.361 "params": { 00:19:18.361 "crdt1": 0, 00:19:18.361 "crdt2": 0, 00:19:18.361 "crdt3": 0 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "nvmf_create_transport", 00:19:18.361 "params": { 00:19:18.361 "trtype": "TCP", 00:19:18.361 "max_queue_depth": 128, 00:19:18.361 "max_io_qpairs_per_ctrlr": 127, 00:19:18.361 "in_capsule_data_size": 4096, 00:19:18.361 "max_io_size": 131072, 00:19:18.361 "io_unit_size": 131072, 00:19:18.361 "max_aq_depth": 128, 00:19:18.361 "num_shared_buffers": 511, 00:19:18.361 "buf_cache_size": 4294967295, 00:19:18.361 "dif_insert_or_strip": false, 00:19:18.361 "zcopy": false, 00:19:18.361 "c2h_success": false, 00:19:18.361 "sock_priority": 0, 00:19:18.361 "abort_timeout_sec": 1, 00:19:18.361 "ack_timeout": 0, 00:19:18.361 "data_wr_pool_size": 0 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "nvmf_create_subsystem", 00:19:18.361 "params": { 00:19:18.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.361 "allow_any_host": false, 00:19:18.361 "serial_number": "00000000000000000000", 00:19:18.361 "model_number": "SPDK bdev Controller", 00:19:18.361 "max_namespaces": 32, 00:19:18.361 "min_cntlid": 1, 00:19:18.361 "max_cntlid": 65519, 00:19:18.361 "ana_reporting": false 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "nvmf_subsystem_add_host", 00:19:18.361 "params": { 00:19:18.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.361 "host": "nqn.2016-06.io.spdk:host1", 00:19:18.361 "psk": "key0" 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 00:19:18.361 "method": "nvmf_subsystem_add_ns", 00:19:18.361 "params": { 00:19:18.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.361 "namespace": { 00:19:18.361 "nsid": 1, 00:19:18.361 "bdev_name": "malloc0", 00:19:18.361 "nguid": "C5C423F5BA224303B42B2CB9B5BFEB69", 00:19:18.361 "uuid": "c5c423f5-ba22-4303-b42b-2cb9b5bfeb69", 00:19:18.361 "no_auto_visible": false 00:19:18.361 } 00:19:18.361 } 00:19:18.361 }, 00:19:18.361 { 
00:19:18.361 "method": "nvmf_subsystem_add_listener", 00:19:18.361 "params": { 00:19:18.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.361 "listen_address": { 00:19:18.361 "trtype": "TCP", 00:19:18.361 "adrfam": "IPv4", 00:19:18.361 "traddr": "10.0.0.2", 00:19:18.361 "trsvcid": "4420" 00:19:18.361 }, 00:19:18.361 "secure_channel": false, 00:19:18.361 "sock_impl": "ssl" 00:19:18.361 } 00:19:18.361 } 00:19:18.361 ] 00:19:18.361 } 00:19:18.361 ] 00:19:18.361 }' 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1744810 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1744810 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1744810 ']' 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.361 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.361 [2024-12-06 11:20:51.141588] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:19:18.361 [2024-12-06 11:20:51.141629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.361 [2024-12-06 11:20:51.215201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.361 [2024-12-06 11:20:51.250166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.361 [2024-12-06 11:20:51.250201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.361 [2024-12-06 11:20:51.250208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.361 [2024-12-06 11:20:51.250213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.361 [2024-12-06 11:20:51.250218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:18.361 [2024-12-06 11:20:51.250811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.621 [2024-12-06 11:20:51.462962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.621 [2024-12-06 11:20:51.494998] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.621 [2024-12-06 11:20:51.495200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1745030 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1745030 /var/tmp/bdevperf.sock 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1745030 ']' 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.189 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:19.189 "subsystems": [ 00:19:19.189 { 00:19:19.189 "subsystem": "keyring", 00:19:19.189 "config": [ 00:19:19.189 { 00:19:19.189 "method": "keyring_file_add_key", 00:19:19.189 "params": { 00:19:19.189 "name": "key0", 00:19:19.189 "path": "/tmp/tmp.hYEkwJb2VO" 00:19:19.189 } 00:19:19.189 } 00:19:19.189 ] 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "subsystem": "iobuf", 00:19:19.189 "config": [ 00:19:19.189 { 00:19:19.189 "method": "iobuf_set_options", 00:19:19.189 "params": { 00:19:19.189 "small_pool_count": 8192, 00:19:19.189 "large_pool_count": 1024, 00:19:19.189 "small_bufsize": 8192, 00:19:19.189 "large_bufsize": 135168, 00:19:19.189 "enable_numa": false 00:19:19.189 } 00:19:19.189 } 00:19:19.189 ] 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "subsystem": "sock", 00:19:19.189 "config": [ 00:19:19.189 { 00:19:19.189 "method": "sock_set_default_impl", 00:19:19.189 "params": { 00:19:19.189 "impl_name": "posix" 00:19:19.189 } 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "method": "sock_impl_set_options", 00:19:19.189 "params": { 00:19:19.189 "impl_name": "ssl", 00:19:19.189 "recv_buf_size": 4096, 00:19:19.189 "send_buf_size": 4096, 00:19:19.189 "enable_recv_pipe": true, 00:19:19.189 "enable_quickack": false, 00:19:19.189 "enable_placement_id": 0, 00:19:19.189 "enable_zerocopy_send_server": true, 00:19:19.189 "enable_zerocopy_send_client": false, 00:19:19.189 "zerocopy_threshold": 0, 00:19:19.189 "tls_version": 0, 00:19:19.189 "enable_ktls": false 00:19:19.189 } 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "method": "sock_impl_set_options", 00:19:19.189 "params": { 
00:19:19.189 "impl_name": "posix", 00:19:19.189 "recv_buf_size": 2097152, 00:19:19.189 "send_buf_size": 2097152, 00:19:19.189 "enable_recv_pipe": true, 00:19:19.189 "enable_quickack": false, 00:19:19.189 "enable_placement_id": 0, 00:19:19.189 "enable_zerocopy_send_server": true, 00:19:19.189 "enable_zerocopy_send_client": false, 00:19:19.189 "zerocopy_threshold": 0, 00:19:19.189 "tls_version": 0, 00:19:19.189 "enable_ktls": false 00:19:19.189 } 00:19:19.189 } 00:19:19.189 ] 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "subsystem": "vmd", 00:19:19.189 "config": [] 00:19:19.189 }, 00:19:19.189 { 00:19:19.189 "subsystem": "accel", 00:19:19.189 "config": [ 00:19:19.189 { 00:19:19.189 "method": "accel_set_options", 00:19:19.189 "params": { 00:19:19.189 "small_cache_size": 128, 00:19:19.189 "large_cache_size": 16, 00:19:19.189 "task_count": 2048, 00:19:19.190 "sequence_count": 2048, 00:19:19.190 "buf_count": 2048 00:19:19.190 } 00:19:19.190 } 00:19:19.190 ] 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "subsystem": "bdev", 00:19:19.190 "config": [ 00:19:19.190 { 00:19:19.190 "method": "bdev_set_options", 00:19:19.190 "params": { 00:19:19.190 "bdev_io_pool_size": 65535, 00:19:19.190 "bdev_io_cache_size": 256, 00:19:19.190 "bdev_auto_examine": true, 00:19:19.190 "iobuf_small_cache_size": 128, 00:19:19.190 "iobuf_large_cache_size": 16 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_raid_set_options", 00:19:19.190 "params": { 00:19:19.190 "process_window_size_kb": 1024, 00:19:19.190 "process_max_bandwidth_mb_sec": 0 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_iscsi_set_options", 00:19:19.190 "params": { 00:19:19.190 "timeout_sec": 30 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_nvme_set_options", 00:19:19.190 "params": { 00:19:19.190 "action_on_timeout": "none", 00:19:19.190 "timeout_us": 0, 00:19:19.190 "timeout_admin_us": 0, 00:19:19.190 "keep_alive_timeout_ms": 10000, 00:19:19.190 
"arbitration_burst": 0, 00:19:19.190 "low_priority_weight": 0, 00:19:19.190 "medium_priority_weight": 0, 00:19:19.190 "high_priority_weight": 0, 00:19:19.190 "nvme_adminq_poll_period_us": 10000, 00:19:19.190 "nvme_ioq_poll_period_us": 0, 00:19:19.190 "io_queue_requests": 512, 00:19:19.190 "delay_cmd_submit": true, 00:19:19.190 "transport_retry_count": 4, 00:19:19.190 "bdev_retry_count": 3, 00:19:19.190 "transport_ack_timeout": 0, 00:19:19.190 "ctrlr_loss_timeout_sec": 0, 00:19:19.190 "reconnect_delay_sec": 0, 00:19:19.190 "fast_io_fail_timeout_sec": 0, 00:19:19.190 "disable_auto_failback": false, 00:19:19.190 "generate_uuids": false, 00:19:19.190 "transport_tos": 0, 00:19:19.190 "nvme_error_stat": false, 00:19:19.190 "rdma_srq_size": 0, 00:19:19.190 "io_path_stat": false, 00:19:19.190 "allow_accel_sequence": false, 00:19:19.190 "rdma_max_cq_size": 0, 00:19:19.190 "rdma_cm_event_timeout_ms": 0, 00:19:19.190 "dhchap_digests": [ 00:19:19.190 "sha256", 00:19:19.190 "sha384", 00:19:19.190 "sha512" 00:19:19.190 ], 00:19:19.190 "dhchap_dhgroups": [ 00:19:19.190 "null", 00:19:19.190 "ffdhe2048", 00:19:19.190 "ffdhe3072", 00:19:19.190 "ffdhe4096", 00:19:19.190 "ffdhe6144", 00:19:19.190 "ffdhe8192" 00:19:19.190 ] 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_nvme_attach_controller", 00:19:19.190 "params": { 00:19:19.190 "name": "nvme0", 00:19:19.190 "trtype": "TCP", 00:19:19.190 "adrfam": "IPv4", 00:19:19.190 "traddr": "10.0.0.2", 00:19:19.190 "trsvcid": "4420", 00:19:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.190 "prchk_reftag": false, 00:19:19.190 "prchk_guard": false, 00:19:19.190 "ctrlr_loss_timeout_sec": 0, 00:19:19.190 "reconnect_delay_sec": 0, 00:19:19.190 "fast_io_fail_timeout_sec": 0, 00:19:19.190 "psk": "key0", 00:19:19.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.190 "hdgst": false, 00:19:19.190 "ddgst": false, 00:19:19.190 "multipath": "multipath" 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 
"method": "bdev_nvme_set_hotplug", 00:19:19.190 "params": { 00:19:19.190 "period_us": 100000, 00:19:19.190 "enable": false 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_enable_histogram", 00:19:19.190 "params": { 00:19:19.190 "name": "nvme0n1", 00:19:19.190 "enable": true 00:19:19.190 } 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "method": "bdev_wait_for_examine" 00:19:19.190 } 00:19:19.190 ] 00:19:19.190 }, 00:19:19.190 { 00:19:19.190 "subsystem": "nbd", 00:19:19.190 "config": [] 00:19:19.190 } 00:19:19.190 ] 00:19:19.190 }' 00:19:19.190 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.190 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.190 [2024-12-06 11:20:52.028457] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:19:19.190 [2024-12-06 11:20:52.028500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745030 ] 00:19:19.190 [2024-12-06 11:20:52.098406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.449 [2024-12-06 11:20:52.138317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.449 [2024-12-06 11:20:52.291136] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.016 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.016 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.016 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:20.016 11:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:20.274 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.274 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.274 Running I/O for 1 seconds... 00:19:21.212 5773.00 IOPS, 22.55 MiB/s 00:19:21.212 Latency(us) 00:19:21.212 [2024-12-06T10:20:54.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:21.212 Verification LBA range: start 0x0 length 0x2000 00:19:21.212 nvme0n1 : 1.01 5820.90 22.74 0.00 0.00 21847.03 5659.93 26810.18 00:19:21.212 [2024-12-06T10:20:54.150Z] =================================================================================================================== 00:19:21.212 [2024-12-06T10:20:54.150Z] Total : 5820.90 22.74 0.00 0.00 21847.03 5659.93 26810.18 00:19:21.212 { 00:19:21.212 "results": [ 00:19:21.212 { 00:19:21.212 "job": "nvme0n1", 00:19:21.212 "core_mask": "0x2", 00:19:21.212 "workload": "verify", 00:19:21.212 "status": "finished", 00:19:21.212 "verify_range": { 00:19:21.212 "start": 0, 00:19:21.212 "length": 8192 00:19:21.212 }, 00:19:21.212 "queue_depth": 128, 00:19:21.212 "io_size": 4096, 00:19:21.212 "runtime": 1.013933, 00:19:21.212 "iops": 5820.897436023879, 00:19:21.212 "mibps": 22.737880609468277, 00:19:21.212 "io_failed": 0, 00:19:21.212 "io_timeout": 0, 00:19:21.212 "avg_latency_us": 21847.030063152706, 00:19:21.212 "min_latency_us": 5659.927272727273, 00:19:21.212 "max_latency_us": 26810.18181818182 00:19:21.212 } 00:19:21.212 ], 00:19:21.212 "core_count": 1 00:19:21.212 } 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:21.471 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:21.471 nvmf_trace.0 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1745030 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1745030 ']' 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1745030 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1745030 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745030' 00:19:21.471 killing process with pid 1745030 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1745030 00:19:21.471 Received shutdown signal, test time was about 1.000000 seconds 00:19:21.471 00:19:21.471 Latency(us) 00:19:21.471 [2024-12-06T10:20:54.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.471 [2024-12-06T10:20:54.409Z] =================================================================================================================== 00:19:21.471 [2024-12-06T10:20:54.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.471 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1745030 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.731 rmmod nvme_tcp 00:19:21.731 rmmod nvme_fabrics 00:19:21.731 rmmod nvme_keyring 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1744810 ']' 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1744810 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1744810 ']' 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1744810 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744810 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744810' 00:19:21.731 killing process with pid 1744810 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1744810 00:19:21.731 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1744810 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.990 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.892 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:23.892 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.wk1GYd8JNr /tmp/tmp.KGZksruLEB /tmp/tmp.hYEkwJb2VO 00:19:23.892 00:19:23.892 real 1m21.315s 00:19:23.892 user 2m1.041s 00:19:23.892 sys 0m32.640s 00:19:23.892 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.892 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.892 ************************************ 00:19:23.892 END TEST nvmf_tls 00:19:23.892 ************************************ 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.150 ************************************ 00:19:24.150 START TEST nvmf_fips 00:19:24.150 ************************************ 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:24.150 * Looking for test storage... 00:19:24.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.150 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.150 
11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:24.150 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.150 --rc genhtml_branch_coverage=1 00:19:24.150 --rc genhtml_function_coverage=1 00:19:24.150 --rc genhtml_legend=1 00:19:24.150 --rc geninfo_all_blocks=1 00:19:24.150 --rc geninfo_unexecuted_blocks=1 00:19:24.150 00:19:24.150 ' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.150 --rc genhtml_branch_coverage=1 00:19:24.150 --rc genhtml_function_coverage=1 00:19:24.150 --rc genhtml_legend=1 00:19:24.150 --rc geninfo_all_blocks=1 00:19:24.150 --rc geninfo_unexecuted_blocks=1 00:19:24.150 00:19:24.150 ' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.150 --rc genhtml_branch_coverage=1 00:19:24.150 --rc genhtml_function_coverage=1 00:19:24.150 --rc genhtml_legend=1 00:19:24.150 --rc geninfo_all_blocks=1 00:19:24.150 --rc geninfo_unexecuted_blocks=1 00:19:24.150 00:19:24.150 ' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.150 --rc genhtml_branch_coverage=1 00:19:24.150 --rc genhtml_function_coverage=1 00:19:24.150 --rc genhtml_legend=1 00:19:24.150 --rc geninfo_all_blocks=1 00:19:24.150 --rc geninfo_unexecuted_blocks=1 00:19:24.150 00:19:24.150 ' 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
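The trace above walks scripts/common.sh's cmp_versions helper, splitting the detected lcov version (1.15) and the threshold (2) on `.`, `-`, and `:` and comparing component by component. A minimal standalone sketch of that dotted-version "less than" test — the function name `ver_lt` is illustrative, not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version comparison mirroring the
# IFS=.-: / read -ra splitting seen in the trace above.
ver_lt() {
    local -a v1 v2
    local i len a b
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    # Iterate over the longer of the two component lists.
    (( ${#v1[@]} > ${#v2[@]} )) && len=${#v1[@]} || len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0 (so 1.15 vs 2 acts like 1.15 vs 2.0).
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 3.1.1 3.0.0 || echo "3.1.1 >= 3.0.0"
```

This is the same shape of check the run later repeats for OpenSSL (3.1.1 against the 3.0.0 FIPS baseline), just with the comparison operator flipped.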
00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.150 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.409 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.409 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:24.409 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:24.410 Error setting digest 00:19:24.410 4052EF58E07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:24.410 4052EF58E07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.410 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:24.410 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.983 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
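The nvmf/common.sh lines above populate per-family PCI-ID arrays (e810, x722, mlx) and then match each discovered device against them. A compact sketch of that classification, using only the vendor:device IDs visible in the trace; `classify_nic` is a hypothetical name, and the Mellanox arm is simplified to a vendor-wide wildcard rather than the explicit ID list the script builds:

```shell
# Sketch: map a "vendor:device" PCI ID pair onto the NIC families the
# trace above builds arrays for. IDs taken from the nvmf/common.sh
# lines shown; classify_nic is illustrative only.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # simplified: any Mellanox ID
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two "Found 0000:af:00.x" devices above
```

On this node both discovered ports report 0x8086:0x159b, which is why the run takes the e810 branch and binds the ice driver.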
00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.984 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.984 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.984 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.984 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:30.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:19:30.984 00:19:30.984 --- 10.0.0.2 ping statistics --- 00:19:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.984 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:30.984 00:19:30.984 --- 10.0.0.1 ping statistics --- 00:19:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.984 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.984 11:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1749345 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1749345 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1749345 ']' 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.984 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.984 [2024-12-06 11:21:03.326584] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:19:30.985 [2024-12-06 11:21:03.326628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.985 [2024-12-06 11:21:03.403194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.985 [2024-12-06 11:21:03.442824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.985 [2024-12-06 11:21:03.442855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.985 [2024-12-06 11:21:03.442862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.985 [2024-12-06 11:21:03.442867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.985 [2024-12-06 11:21:03.442872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:30.985 [2024-12-06 11:21:03.443429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kyS 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kyS 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kyS 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kyS 00:19:31.244 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:31.503 [2024-12-06 11:21:04.322954] tcp.c: 
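`nvmfappstart` launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the app's RPC socket (`/var/tmp/spdk.sock` in the trace, with `max_retries=100`) accepts connections. A generic polling sketch of that idea (simplified: the real helper also re-checks between attempts that the target pid is still alive):

```shell
# Wait until a UNIX-domain socket becomes connectable, as waitforlisten does
# for /var/tmp/spdk.sock in the trace above. Sketch only: the SPDK helper
# additionally verifies the target pid has not died while polling.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        if python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.connect(sys.argv[1])' "$sock" 2>/dev/null; then
            return 0   # something is listening
        fi
        sleep 0.1
    done
    return 1
}
```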
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.503 [2024-12-06 11:21:04.338960] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.503 [2024-12-06 11:21:04.339151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.503 malloc0 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1749477 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1749477 /var/tmp/bdevperf.sock 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1749477 ']' 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.503 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.762 [2024-12-06 11:21:04.453257] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:19:31.762 [2024-12-06 11:21:04.453301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749477 ] 00:19:31.762 [2024-12-06 11:21:04.528262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.762 [2024-12-06 11:21:04.565990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.762 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.762 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:31.762 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kyS 00:19:32.021 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.280 [2024-12-06 11:21:05.013926] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.280 TLSTESTn1 00:19:32.280 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.280 Running I/O for 10 seconds... 
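On the initiator side the test writes the interchange-format TLS PSK to a 0600-mode temp file, registers it with the keyring over bdevperf's RPC socket, and attaches a TLS controller referencing that key. A condensed sketch of those RPC calls (paths, NQNs, and the PSK value are copied from the log; `DRY_RUN` and `SPDK_DIR` are our own knobs, not SPDK's, and with `DRY_RUN=0` this needs a running bdevperf):

```shell
# Initiator-side TLS setup mirrored from the trace above. DRY_RUN=1 (the
# default here) prints the rpc.py invocations instead of issuing them.
: "${DRY_RUN:=1}"
RPC="${SPDK_DIR:-.}/scripts/rpc.py"      # path into an SPDK checkout (assumption)
SOCK=/var/tmp/bdevperf.sock
KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

rpc() { if [[ "$DRY_RUN" == 1 ]]; then echo "rpc.py -s $SOCK $*"; else "$RPC" -s "$SOCK" "$@"; fi; }

key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$KEY" > "$key_path"   # no trailing newline, matching echo -n above
chmod 0600 "$key_path"             # same restrictive mode as the trace

rpc keyring_file_add_key key0 "$key_path"
rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```

The attach then surfaces the namespace as `TLSTESTn1`, which is the bdev name `bdevperf.py perform_tests` exercises in the run below.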
00:19:34.588 5781.00 IOPS, 22.58 MiB/s [2024-12-06T10:21:08.459Z] 5825.50 IOPS, 22.76 MiB/s [2024-12-06T10:21:09.394Z] 5754.00 IOPS, 22.48 MiB/s [2024-12-06T10:21:10.330Z] 5777.25 IOPS, 22.57 MiB/s [2024-12-06T10:21:11.268Z] 5758.60 IOPS, 22.49 MiB/s [2024-12-06T10:21:12.647Z] 5782.33 IOPS, 22.59 MiB/s [2024-12-06T10:21:13.584Z] 5704.14 IOPS, 22.28 MiB/s [2024-12-06T10:21:14.522Z] 5630.50 IOPS, 21.99 MiB/s [2024-12-06T10:21:15.458Z] 5584.89 IOPS, 21.82 MiB/s [2024-12-06T10:21:15.458Z] 5525.10 IOPS, 21.58 MiB/s 00:19:42.520 Latency(us) 00:19:42.520 [2024-12-06T10:21:15.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.520 Verification LBA range: start 0x0 length 0x2000 00:19:42.520 TLSTESTn1 : 10.02 5529.34 21.60 0.00 0.00 23116.31 6553.60 30146.56 00:19:42.520 [2024-12-06T10:21:15.458Z] =================================================================================================================== 00:19:42.520 [2024-12-06T10:21:15.458Z] Total : 5529.34 21.60 0.00 0.00 23116.31 6553.60 30146.56 00:19:42.520 { 00:19:42.520 "results": [ 00:19:42.520 { 00:19:42.520 "job": "TLSTESTn1", 00:19:42.520 "core_mask": "0x4", 00:19:42.520 "workload": "verify", 00:19:42.520 "status": "finished", 00:19:42.520 "verify_range": { 00:19:42.520 "start": 0, 00:19:42.520 "length": 8192 00:19:42.520 }, 00:19:42.520 "queue_depth": 128, 00:19:42.520 "io_size": 4096, 00:19:42.520 "runtime": 10.015293, 00:19:42.520 "iops": 5529.343974260164, 00:19:42.520 "mibps": 21.598999899453766, 00:19:42.520 "io_failed": 0, 00:19:42.520 "io_timeout": 0, 00:19:42.520 "avg_latency_us": 23116.305497621306, 00:19:42.520 "min_latency_us": 6553.6, 00:19:42.520 "max_latency_us": 30146.56 00:19:42.520 } 00:19:42.520 ], 00:19:42.520 "core_count": 1 00:19:42.520 } 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:42.520 11:21:15 
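The MiB/s column in the summary follows directly from the JSON fields: throughput = IOPS × io_size / 2^20, i.e. 5529.34 × 4096 / 1048576 ≈ 21.60 MiB/s, matching the table. A quick recomputation:

```shell
# Recompute the bdevperf summary throughput from the results JSON above:
# MiB/s = iops * io_size / 2^20.
awk 'BEGIN {
    iops = 5529.343974260164   # "iops" from the results JSON
    io   = 4096                # "io_size" in bytes
    printf "%.2f MiB/s\n", iops * io / (1024 * 1024)
}'
```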
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:42.520 nvmf_trace.0 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1749477 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1749477 ']' 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1749477 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749477 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749477' 00:19:42.520 killing process with pid 1749477 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1749477 00:19:42.520 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.520 00:19:42.520 Latency(us) 00:19:42.520 [2024-12-06T10:21:15.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.520 [2024-12-06T10:21:15.458Z] =================================================================================================================== 00:19:42.520 [2024-12-06T10:21:15.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.520 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1749477 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.778 rmmod nvme_tcp 00:19:42.778 rmmod nvme_fabrics 00:19:42.778 rmmod nvme_keyring 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.778 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1749345 ']' 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1749345 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1749345 ']' 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1749345 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749345 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749345' 00:19:42.778 killing process with pid 1749345 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1749345 00:19:42.778 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1749345 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.037 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kyS 00:19:45.571 00:19:45.571 real 0m21.010s 00:19:45.571 user 0m21.271s 00:19:45.571 sys 0m10.266s 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.571 ************************************ 00:19:45.571 END TEST nvmf_fips 00:19:45.571 ************************************ 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
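`nvmftestfini` undoes the setup in reverse: `iptr` reloads the firewall ruleset with the SPDK-tagged rules filtered out, `_remove_spdk_ns` deletes the namespace, the remaining interface is flushed, and the PSK temp file is removed. A condensed sketch of those teardown steps (the real commands need root, so the `DRY_RUN` knob, our own addition, just prints them):

```shell
# Condensed sketch of the teardown steps traced above. DRY_RUN=1 (the
# default here) prints the commands instead of executing them (root needed).
: "${DRY_RUN:=1}"
NS=cvl_0_0_ns_spdk
INI_IF=cvl_0_1

run() { if [[ "$DRY_RUN" == 1 ]]; then echo "$*"; else "$@"; fi; }

# iptr: reload the ruleset with every SPDK_NVMF-tagged rule filtered out.
if [[ "$DRY_RUN" == 1 ]]; then
    echo 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
fi
run ip netns delete "$NS"          # what _remove_spdk_ns boils down to here
run ip -4 addr flush "$INI_IF"
run rm -f /tmp/spdk-psk.kyS        # the exact temp name from this run's mktemp
```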
# xtrace_disable 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.571 ************************************ 00:19:45.571 START TEST nvmf_control_msg_list 00:19:45.571 ************************************ 00:19:45.571 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:45.571 * Looking for test storage... 00:19:45.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.571 --rc genhtml_branch_coverage=1 00:19:45.571 --rc genhtml_function_coverage=1 00:19:45.571 --rc genhtml_legend=1 00:19:45.571 --rc geninfo_all_blocks=1 00:19:45.571 --rc geninfo_unexecuted_blocks=1 00:19:45.571 00:19:45.571 ' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.571 --rc genhtml_branch_coverage=1 00:19:45.571 --rc genhtml_function_coverage=1 00:19:45.571 --rc genhtml_legend=1 00:19:45.571 --rc geninfo_all_blocks=1 00:19:45.571 --rc geninfo_unexecuted_blocks=1 00:19:45.571 00:19:45.571 ' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.571 --rc genhtml_branch_coverage=1 00:19:45.571 --rc genhtml_function_coverage=1 00:19:45.571 --rc genhtml_legend=1 00:19:45.571 --rc geninfo_all_blocks=1 00:19:45.571 --rc geninfo_unexecuted_blocks=1 00:19:45.571 00:19:45.571 ' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.571 --rc 
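The xtrace run above steps through `cmp_versions` in scripts/common.sh deciding that the installed lcov (1.15) predates 2: both versions are split on dots and compared field by field, with missing fields treated as 0. A standalone sketch of that comparison (simplified to the less-than case; the real helper also handles `>`, `=`, and mixed `.`/`-` separators):

```shell
# Minimal dotted-version "less than" check, mirroring the cmp_versions
# trace above: split on dots, compare numerically field by field.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad short versions with zeros
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2"
```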
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.571 --rc genhtml_branch_coverage=1 00:19:45.571 --rc genhtml_function_coverage=1 00:19:45.571 --rc genhtml_legend=1 00:19:45.571 --rc geninfo_all_blocks=1 00:19:45.571 --rc geninfo_unexecuted_blocks=1 00:19:45.571 00:19:45.571 ' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.571 11:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.571 11:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.571 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.571 11:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.572 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.137 11:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:52.137 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:52.137 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.137 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.138 11:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:52.138 Found net devices under 0000:af:00.0: cvl_0_0 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.138 11:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:52.138 Found net devices under 0000:af:00.1: cvl_0_1 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.138 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.138 11:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:19:52.138 00:19:52.138 --- 10.0.0.2 ping statistics --- 00:19:52.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.138 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:19:52.138 00:19:52.138 --- 10.0.0.1 ping statistics --- 00:19:52.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.138 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1755494 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1755494 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1755494 ']' 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.138 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.138 [2024-12-06 11:21:24.226292] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:19:52.138 [2024-12-06 11:21:24.226335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.138 [2024-12-06 11:21:24.302661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.138 [2024-12-06 11:21:24.340250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.138 [2024-12-06 11:21:24.340285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.138 [2024-12-06 11:21:24.340291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.138 [2024-12-06 11:21:24.340296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.138 [2024-12-06 11:21:24.340301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
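At this point the target app has been launched inside the namespace and the harness blocks in `waitforlisten` until the RPC socket at /var/tmp/spdk.sock appears. A minimal sketch of that style of readiness poll follows; the function name, retry count, and sleep interval are illustrative assumptions, not the actual `autotest_common.sh` helper:

```shell
#!/usr/bin/env bash
# Illustrative waitforlisten-style poll: succeed once the app's UNIX socket
# exists, fail fast if the process dies first. Retry count and interval are
# assumptions for the sketch.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=${3:-100}
    while ((tries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app exited before listening
        [ -S "$sock" ] && return 0              # RPC socket is up
        sleep 0.1
    done
    return 1                                    # timed out waiting
}
```

Polling the socket rather than sleeping a fixed time is what lets the log proceed to `timing_exit start_nvmf_tgt` as soon as the target is actually ready.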
00:19:52.138 [2024-12-06 11:21:24.340862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.138 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:52.139 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.139 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.398 [2024-12-06 11:21:25.076718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.398 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 Malloc0 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 [2024-12-06 11:21:25.116820] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1755713 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1755714 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1755715 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1755713 00:19:52.399 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.399 [2024-12-06 11:21:25.205465] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
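The script above starts three `spdk_nvme_perf` clients concurrently, one per core mask (0x2, 0x4, 0x8), then waits on each pid in turn. The fan-out/fan-in shape can be sketched as below, with the real perf command line (`-c <coremask> -q 1 -o 4096 -w randread -t 1 -r '...'`) stubbed out by a placeholder function:

```shell
#!/usr/bin/env bash
# Pattern sketch only: run_client stands in for the real spdk_nvme_perf
# invocation seen in the log; output text is illustrative.
run_client() {
    echo "perf client on core mask $1 finished"
}

# Fan out: launch each client in the background, remembering its pid.
run_client 0x2 & perf_pid1=$!
run_client 0x4 & perf_pid2=$!
run_client 0x8 & perf_pid3=$!

# Fan in: collect each client's exit status before tearing down the target.
wait "$perf_pid1"
wait "$perf_pid2"
wait "$perf_pid3"
```

Waiting on each pid individually (rather than a bare `wait`) is what lets the harness interleave the per-client result blocks seen below with `wait 1755713`, `wait 1755714`, and `wait 1755715`.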
00:19:52.399 [2024-12-06 11:21:25.205641] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:52.399 [2024-12-06 11:21:25.205790] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:53.787 Initializing NVMe Controllers 00:19:53.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:53.787 Initialization complete. Launching workers. 00:19:53.787 ======================================================== 00:19:53.787 Latency(us) 00:19:53.787 Device Information : IOPS MiB/s Average min max 00:19:53.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 7892.00 30.83 126.42 109.86 319.23 00:19:53.787 ======================================================== 00:19:53.787 Total : 7892.00 30.83 126.42 109.86 319.23 00:19:53.787 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1755714 00:19:53.787 Initializing NVMe Controllers 00:19:53.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:53.787 Initialization complete. Launching workers. 
00:19:53.787 ======================================================== 00:19:53.787 Latency(us) 00:19:53.787 Device Information : IOPS MiB/s Average min max 00:19:53.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40892.98 40691.92 41019.99 00:19:53.787 ======================================================== 00:19:53.787 Total : 25.00 0.10 40892.98 40691.92 41019.99 00:19:53.787 00:19:53.787 Initializing NVMe Controllers 00:19:53.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:53.787 Initialization complete. Launching workers. 00:19:53.787 ======================================================== 00:19:53.787 Latency(us) 00:19:53.787 Device Information : IOPS MiB/s Average min max 00:19:53.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40888.42 40640.98 41118.40 00:19:53.787 ======================================================== 00:19:53.787 Total : 25.00 0.10 40888.42 40640.98 41118.40 00:19:53.787 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1755715 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:53.787 11:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:53.787 rmmod nvme_tcp 00:19:53.787 rmmod nvme_fabrics 00:19:53.787 rmmod nvme_keyring 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1755494 ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1755494 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1755494 ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1755494 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755494 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755494' 00:19:53.787 killing process with pid 1755494 00:19:53.787 
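Teardown follows the usual `killprocess` shape: confirm the pid is still alive, signal it, then reap it. A reduced sketch under stated assumptions — the real helper additionally checks the process name with `ps --no-headers -o comm=` (visible in the log) before signalling, which is omitted here:

```shell
#!/usr/bin/env bash
# Minimal killprocess-style teardown; the pid-name safety check from the
# real helper is simplified to a plain liveness check for this sketch.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
    kill "$pid"                             # SIGTERM first
    wait "$pid" 2>/dev/null || true         # reap; ignore nonzero exit code
}
```

Reaping with `wait` only works because the target was started by this same shell; killing an inherited pid would need a polling loop on `kill -0` instead.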
11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1755494 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1755494 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.787 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.325 00:19:56.325 real 0m10.706s 00:19:56.325 user 0m7.316s 00:19:56.325 sys 0m5.460s 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:56.325 ************************************ 00:19:56.325 END TEST nvmf_control_msg_list 00:19:56.325 ************************************ 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.325 ************************************ 00:19:56.325 START TEST nvmf_wait_for_buf 00:19:56.325 ************************************ 00:19:56.325 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:56.325 * Looking for test storage... 
00:19:56.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:56.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:56.326 --rc genhtml_branch_coverage=1
00:19:56.326 --rc genhtml_function_coverage=1
00:19:56.326 --rc genhtml_legend=1
00:19:56.326 --rc geninfo_all_blocks=1
00:19:56.326 --rc geninfo_unexecuted_blocks=1
00:19:56.326
00:19:56.326 '
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:56.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:56.326 --rc genhtml_branch_coverage=1
00:19:56.326 --rc genhtml_function_coverage=1
00:19:56.326 --rc genhtml_legend=1
00:19:56.326 --rc geninfo_all_blocks=1
00:19:56.326 --rc geninfo_unexecuted_blocks=1
00:19:56.326
00:19:56.326 '
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:19:56.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:56.326 --rc genhtml_branch_coverage=1
00:19:56.326 --rc genhtml_function_coverage=1
00:19:56.326 --rc genhtml_legend=1
00:19:56.326 --rc geninfo_all_blocks=1
00:19:56.326 --rc geninfo_unexecuted_blocks=1
00:19:56.326
00:19:56.326 '
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:19:56.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:56.326 --rc genhtml_branch_coverage=1
00:19:56.326 --rc genhtml_function_coverage=1
00:19:56.326 --rc genhtml_legend=1
00:19:56.326 --rc geninfo_all_blocks=1
00:19:56.326 --rc geninfo_unexecuted_blocks=1
00:19:56.326
00:19:56.326 '
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:56.326 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable
00:19:56.327 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=()
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:20:02.897 Found 0000:af:00.0 (0x8086 - 0x159b)
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:20:02.897 Found 0000:af:00.1 (0x8086 - 0x159b)
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:20:02.897 Found net devices under 0000:af:00.0: cvl_0_0
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:20:02.897 Found net devices under 0000:af:00.1: cvl_0_1
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:02.897 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:02.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:02.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms
00:20:02.898
00:20:02.898 --- 10.0.0.2 ping statistics ---
00:20:02.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:02.898 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:02.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:02.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms
00:20:02.898
00:20:02.898 --- 10.0.0.1 ping statistics ---
00:20:02.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:02.898 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1759591
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1759591
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1759591 ']'
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:02.898 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 11:21:34.981798] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization...
[2024-12-06 11:21:34.981847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-06 11:21:35.058727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 11:21:35.096823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-06 11:21:35.096856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:02.898 [2024-12-06 11:21:35.096863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-06 11:21:35.096869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-06 11:21:35.096874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-06 11:21:35.097426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 Malloc0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 11:21:35.262546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 11:21:35.290730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.898 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:02.898 [2024-12-06 11:21:35.372996] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:03.834 Initializing NVMe Controllers 00:20:03.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:03.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:03.834 Initialization complete. Launching workers. 00:20:03.834 ======================================================== 00:20:03.834 Latency(us) 00:20:03.834 Device Information : IOPS MiB/s Average min max 00:20:03.834 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 49.00 6.12 84143.13 31895.29 191533.18 00:20:03.834 ======================================================== 00:20:03.834 Total : 49.00 6.12 84143.13 31895.29 191533.18 00:20:03.834 00:20:03.834 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:03.834 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:03.834 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.834 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:03.834 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.093 11:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=758 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 758 -eq 0 ]] 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.093 rmmod nvme_tcp 00:20:04.093 rmmod nvme_fabrics 00:20:04.093 rmmod nvme_keyring 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1759591 ']' 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1759591 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1759591 ']' 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1759591 
00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1759591 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1759591' 00:20:04.093 killing process with pid 1759591 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1759591 00:20:04.093 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1759591 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.352 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.352 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.256 00:20:06.256 real 0m10.372s 00:20:06.256 user 0m3.893s 00:20:06.256 sys 0m4.909s 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.256 ************************************ 00:20:06.256 END TEST nvmf_wait_for_buf 00:20:06.256 ************************************ 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.256 11:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.822 
11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:12.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.822 11:21:44 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:12.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.822 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:12.823 Found net devices under 0000:af:00.0: cvl_0_0 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:12.823 Found net devices under 0000:af:00.1: cvl_0_1 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.823 ************************************ 00:20:12.823 START TEST nvmf_perf_adq 00:20:12.823 ************************************ 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:12.823 * Looking for test storage... 00:20:12.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:12.823 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.823 --rc genhtml_branch_coverage=1 00:20:12.823 --rc genhtml_function_coverage=1 00:20:12.823 --rc genhtml_legend=1 00:20:12.823 --rc geninfo_all_blocks=1 00:20:12.823 --rc geninfo_unexecuted_blocks=1 00:20:12.823 00:20:12.823 ' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.823 --rc genhtml_branch_coverage=1 00:20:12.823 --rc genhtml_function_coverage=1 00:20:12.823 --rc genhtml_legend=1 00:20:12.823 --rc geninfo_all_blocks=1 00:20:12.823 --rc geninfo_unexecuted_blocks=1 00:20:12.823 00:20:12.823 ' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.823 --rc genhtml_branch_coverage=1 00:20:12.823 --rc genhtml_function_coverage=1 00:20:12.823 --rc genhtml_legend=1 00:20:12.823 --rc geninfo_all_blocks=1 00:20:12.823 --rc geninfo_unexecuted_blocks=1 00:20:12.823 00:20:12.823 ' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.823 --rc genhtml_branch_coverage=1 00:20:12.823 --rc genhtml_function_coverage=1 00:20:12.823 --rc genhtml_legend=1 00:20:12.823 --rc geninfo_all_blocks=1 00:20:12.823 --rc geninfo_unexecuted_blocks=1 00:20:12.823 00:20:12.823 ' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.823 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.824 11:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.824 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.099 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:18.099 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.099 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:18.100 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:18.100 Found net devices under 0000:af:00.0: cvl_0_0 00:20:18.100 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:18.100 Found net devices under 0000:af:00.1: cvl_0_1 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
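[editor's note] The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above turns sysfs glob results into bare interface names. Reproduced standalone, with literal paths matching the two ports found in this run:

```shell
# "${arr[@]##*/}" applies the longest-match strip up to the final '/'
# to every array element, i.e. a batch basename.
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
printf '%s\n' "${pci_net_devs[@]}"   # prints: cvl_0_0 then cvl_0_1
```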
00:20:18.100 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:19.504 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:21.536 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
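[editor's note] `adq_reload_driver` above unloads and reloads `ice`, then waits a fixed `sleep 5` for the renamed interfaces to come back. A hypothetical polling helper (not part of the SPDK scripts) sketches the usual alternative to a blind sleep:

```shell
# wait_for is made up for illustration: retry a command until it
# succeeds or the attempt budget runs out.
wait_for() {
    local tries=$1
    shift
    while (( tries-- > 0 )); do
        if "$@"; then
            return 0
        fi
        sleep 0.2
    done
    return 1
}

# On the rig this could poll for the port instead of sleeping, e.g.:
#   wait_for 50 test -e /sys/class/net/cvl_0_0
wait_for 5 true && echo "ready"   # prints: ready
```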
nvmf/common.sh@315 -- # pci_devs=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:26.805 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:26.805 11:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:26.805 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:26.805 Found net devices under 0000:af:00.0: cvl_0_0 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:26.805 Found net devices under 0000:af:00.1: cvl_0_1 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.805 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:20:26.805 00:20:26.805 --- 10.0.0.2 ping statistics --- 00:20:26.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.805 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:20:26.805 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:20:26.805 00:20:26.806 --- 10.0.0.1 ping statistics --- 00:20:26.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.806 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
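[editor's note] The `ipts` call above (expanded at nvmf/common.sh@790) wraps iptables so every rule carries an `SPDK_NVMF:<rule>` comment, letting teardown later find and delete exactly the rules this test inserted. A sketch of the tagging idea that only prints the command, so it runs without root; the real wrapper in nvmf/common.sh invokes iptables directly:

```shell
# Echo the tagged iptables invocation instead of executing it.
# "$*" joins the original arguments into the comment text.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The printed command matches the `-m comment --comment 'SPDK_NVMF:...'` form visible in the trace, modulo shell quoting.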
start_nvmf_tgt 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1768352 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1768352 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1768352 ']' 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.806 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.806 [2024-12-06 11:21:59.330462] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:20:26.806 [2024-12-06 11:21:59.330503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.806 [2024-12-06 11:21:59.408096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.806 [2024-12-06 11:21:59.449293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.806 [2024-12-06 11:21:59.449328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.806 [2024-12-06 11:21:59.449335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.806 [2024-12-06 11:21:59.449341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.806 [2024-12-06 11:21:59.449345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.806 [2024-12-06 11:21:59.450905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.806 [2024-12-06 11:21:59.450934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.806 [2024-12-06 11:21:59.451047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.806 [2024-12-06 11:21:59.451048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:27.371 11:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.371 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 [2024-12-06 11:22:00.318180] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 Malloc1 00:20:27.630 11:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 [2024-12-06 11:22:00.375140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1768431 00:20:27.630 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:27.630 11:22:00 
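[editor's note] The target bring-up above (`adq_configure_nvmf_target 0`) is easier to read as the flat RPC sequence it issues. `rpc_cmd` is stubbed with `echo` here so the sketch runs without a live SPDK target; on the rig it forwards each call to the target's RPC socket:

```shell
# Stub: print each RPC instead of sending it to /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc: $*"; }

rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```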
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:29.528 "tick_rate": 2200000000, 00:20:29.528 "poll_groups": [ 00:20:29.528 { 00:20:29.528 "name": "nvmf_tgt_poll_group_000", 00:20:29.528 "admin_qpairs": 1, 00:20:29.528 "io_qpairs": 1, 00:20:29.528 "current_admin_qpairs": 1, 00:20:29.528 "current_io_qpairs": 1, 00:20:29.528 "pending_bdev_io": 0, 00:20:29.528 "completed_nvme_io": 21650, 00:20:29.528 "transports": [ 00:20:29.528 { 00:20:29.528 "trtype": "TCP" 00:20:29.528 } 00:20:29.528 ] 00:20:29.528 }, 00:20:29.528 { 00:20:29.528 "name": "nvmf_tgt_poll_group_001", 00:20:29.528 "admin_qpairs": 0, 00:20:29.528 "io_qpairs": 1, 00:20:29.528 "current_admin_qpairs": 0, 00:20:29.528 "current_io_qpairs": 1, 00:20:29.528 "pending_bdev_io": 0, 00:20:29.528 "completed_nvme_io": 21270, 00:20:29.528 "transports": [ 00:20:29.528 { 00:20:29.528 "trtype": "TCP" 00:20:29.528 } 00:20:29.528 ] 00:20:29.528 }, 00:20:29.528 { 00:20:29.528 "name": "nvmf_tgt_poll_group_002", 00:20:29.528 "admin_qpairs": 0, 00:20:29.528 "io_qpairs": 1, 00:20:29.528 "current_admin_qpairs": 0, 00:20:29.528 "current_io_qpairs": 1, 00:20:29.528 "pending_bdev_io": 0, 00:20:29.528 "completed_nvme_io": 21602, 00:20:29.528 
"transports": [ 00:20:29.528 { 00:20:29.528 "trtype": "TCP" 00:20:29.528 } 00:20:29.528 ] 00:20:29.528 }, 00:20:29.528 { 00:20:29.528 "name": "nvmf_tgt_poll_group_003", 00:20:29.528 "admin_qpairs": 0, 00:20:29.528 "io_qpairs": 1, 00:20:29.528 "current_admin_qpairs": 0, 00:20:29.528 "current_io_qpairs": 1, 00:20:29.528 "pending_bdev_io": 0, 00:20:29.528 "completed_nvme_io": 21524, 00:20:29.528 "transports": [ 00:20:29.528 { 00:20:29.528 "trtype": "TCP" 00:20:29.528 } 00:20:29.528 ] 00:20:29.528 } 00:20:29.528 ] 00:20:29.528 }' 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:29.528 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1768431 00:20:37.639 Initializing NVMe Controllers 00:20:37.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:37.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:37.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:37.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:37.639 Initialization complete. Launching workers. 
00:20:37.639 ======================================================== 00:20:37.639 Latency(us) 00:20:37.639 Device Information : IOPS MiB/s Average min max 00:20:37.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11451.90 44.73 5589.64 2181.01 9448.63 00:20:37.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11317.70 44.21 5656.03 1952.97 9441.24 00:20:37.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11469.60 44.80 5580.51 1823.26 9497.40 00:20:37.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11543.00 45.09 5543.47 1743.26 10230.53 00:20:37.639 ======================================================== 00:20:37.639 Total : 45782.20 178.84 5592.12 1743.26 10230.53 00:20:37.639 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.639 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.639 rmmod nvme_tcp 00:20:37.639 rmmod nvme_fabrics 00:20:37.898 rmmod nvme_keyring 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:37.898 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1768352 ']' 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1768352 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1768352 ']' 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1768352 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768352 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768352' 00:20:37.898 killing process with pid 1768352 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1768352 00:20:37.898 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1768352 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:38.157 
11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.157 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.060 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.060 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:40.060 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:40.060 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:41.439 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:43.977 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.253 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.254 11:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:49.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:49.254 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:49.254 Found net devices under 0000:af:00.0: cvl_0_0 00:20:49.254 11:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:49.254 Found net devices under 0000:af:00.1: cvl_0_1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:20:49.254 00:20:49.254 --- 10.0.0.2 ping statistics --- 00:20:49.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.254 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:49.254 00:20:49.254 --- 10.0.0.1 ping statistics --- 00:20:49.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.254 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:49.254 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:49.255 net.core.busy_poll = 1 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:49.255 net.core.busy_read = 1 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1772720 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1772720 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1772720 ']' 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.255 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.255 [2024-12-06 11:22:21.995472] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:20:49.255 [2024-12-06 11:22:21.995515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.255 [2024-12-06 11:22:22.071702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.255 [2024-12-06 11:22:22.110127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.255 [2024-12-06 11:22:22.110165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.255 [2024-12-06 11:22:22.110171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.255 [2024-12-06 11:22:22.110177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:49.255 [2024-12-06 11:22:22.110181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.255 [2024-12-06 11:22:22.111693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.255 [2024-12-06 11:22:22.111808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.255 [2024-12-06 11:22:22.111920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.255 [2024-12-06 11:22:22.111920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 [2024-12-06 11:22:22.974821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:50.192 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 Malloc1 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 [2024-12-06 11:22:23.043804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1772819 
00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:20:50.192 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:20:52.722 "tick_rate": 2200000000,
00:20:52.722 "poll_groups": [
00:20:52.722 {
00:20:52.722 "name": "nvmf_tgt_poll_group_000",
00:20:52.722 "admin_qpairs": 1,
00:20:52.722 "io_qpairs": 4,
00:20:52.722 "current_admin_qpairs": 1,
00:20:52.722 "current_io_qpairs": 4,
00:20:52.722 "pending_bdev_io": 0,
00:20:52.722 "completed_nvme_io": 46857,
00:20:52.722 "transports": [
00:20:52.722 {
00:20:52.722 "trtype": "TCP"
00:20:52.722 }
00:20:52.722 ]
00:20:52.722 },
00:20:52.722 {
00:20:52.722 "name": "nvmf_tgt_poll_group_001",
00:20:52.722 "admin_qpairs": 0,
00:20:52.722 "io_qpairs": 0,
00:20:52.722 "current_admin_qpairs": 0,
00:20:52.722 "current_io_qpairs": 0,
00:20:52.722 "pending_bdev_io": 0,
00:20:52.722 "completed_nvme_io": 0,
00:20:52.722 "transports": [
00:20:52.722 {
00:20:52.722 "trtype": "TCP"
00:20:52.722 }
00:20:52.722 ]
00:20:52.722 },
00:20:52.722 {
00:20:52.722 "name": "nvmf_tgt_poll_group_002",
00:20:52.722 "admin_qpairs": 0,
00:20:52.722 "io_qpairs": 0,
00:20:52.722 "current_admin_qpairs": 0,
00:20:52.722 "current_io_qpairs": 0,
00:20:52.722 "pending_bdev_io": 0,
00:20:52.722 "completed_nvme_io": 0,
00:20:52.722 "transports": [
00:20:52.722 {
00:20:52.722 "trtype": "TCP"
00:20:52.722 }
00:20:52.722 ]
00:20:52.722 },
00:20:52.722 {
00:20:52.722 "name": "nvmf_tgt_poll_group_003",
00:20:52.722 "admin_qpairs": 0,
00:20:52.722 "io_qpairs": 0,
00:20:52.722 "current_admin_qpairs": 0,
00:20:52.722 "current_io_qpairs": 0,
00:20:52.722 "pending_bdev_io": 0,
00:20:52.722 "completed_nvme_io": 0,
00:20:52.722 "transports": [
00:20:52.722 {
00:20:52.722 "trtype": "TCP"
00:20:52.722 }
00:20:52.722 ]
00:20:52.722 }
00:20:52.722 ]
00:20:52.722 }'
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]]
00:20:52.722 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1772819
00:21:00.862 Initializing NVMe Controllers
00:21:00.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:00.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:00.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:00.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:00.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:00.862 Initialization complete. Launching workers.
00:21:00.862 ========================================================
00:21:00.862 Latency(us)
00:21:00.862 Device Information : IOPS MiB/s Average min max
00:21:00.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6266.40 24.48 10211.94 1154.00 54906.44
00:21:00.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5953.40 23.26 10749.30 1359.22 53872.24
00:21:00.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6793.20 26.54 9458.84 1218.11 54364.97
00:21:00.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6158.40 24.06 10416.30 1353.32 55032.03
00:21:00.862 ========================================================
00:21:00.862 Total : 25171.39 98.33 10185.79 1154.00 55032.03
00:21:00.862
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:00.862 rmmod nvme_tcp
00:21:00.862 rmmod nvme_fabrics
00:21:00.862 rmmod nvme_keyring
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:00.862 11:22:33
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1772720 ']' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1772720 ']' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772720' 00:21:00.862 killing process with pid 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1772720 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:00.862 
11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.862 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:04.160 00:21:04.160 real 0m51.766s 00:21:04.160 user 2m49.738s 00:21:04.160 sys 0m9.997s 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.160 ************************************ 00:21:04.160 END TEST nvmf_perf_adq 00:21:04.160 ************************************ 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.160 ************************************ 00:21:04.160 START TEST nvmf_shutdown 00:21:04.160 ************************************ 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:04.160 * Looking for test storage... 00:21:04.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.160 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:04.160 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.161 --rc genhtml_branch_coverage=1 00:21:04.161 --rc genhtml_function_coverage=1 00:21:04.161 --rc genhtml_legend=1 00:21:04.161 --rc geninfo_all_blocks=1 00:21:04.161 --rc geninfo_unexecuted_blocks=1 00:21:04.161 00:21:04.161 ' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.161 --rc genhtml_branch_coverage=1 00:21:04.161 --rc genhtml_function_coverage=1 00:21:04.161 --rc genhtml_legend=1 00:21:04.161 --rc geninfo_all_blocks=1 00:21:04.161 --rc geninfo_unexecuted_blocks=1 00:21:04.161 00:21:04.161 ' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.161 --rc genhtml_branch_coverage=1 00:21:04.161 --rc genhtml_function_coverage=1 00:21:04.161 --rc genhtml_legend=1 00:21:04.161 --rc geninfo_all_blocks=1 00:21:04.161 --rc geninfo_unexecuted_blocks=1 00:21:04.161 00:21:04.161 ' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.161 --rc genhtml_branch_coverage=1 00:21:04.161 --rc genhtml_function_coverage=1 00:21:04.161 --rc genhtml_legend=1 00:21:04.161 --rc geninfo_all_blocks=1 00:21:04.161 --rc geninfo_unexecuted_blocks=1 00:21:04.161 00:21:04.161 ' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:04.161 ************************************ 00:21:04.161 START TEST nvmf_shutdown_tc1 00:21:04.161 ************************************ 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.161 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:10.738 11:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.738 11:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:10.738 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.738 11:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:10.738 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:10.738 Found net devices under 0000:af:00.0: cvl_0_0 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:10.738 Found net devices under 0000:af:00.1: cvl_0_1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.738 11:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
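The `nvmf_tcp_init` trace above moves one port (`cvl_0_0`) into a dedicated network namespace to play the target while the other (`cvl_0_1`) stays in the root namespace as the initiator. A minimal sketch of that sequence follows, with the interface names and 10.0.0.0/24 addresses copied from the log; the `run`/`DRY_RUN` wrapper and the `_sketch` function name are additions for safe preview (the real commands need root and these exact NICs):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology set up by nvmf_tcp_init above.
# Interface names and addresses are taken from the log; DRY_RUN=1 only
# prints the commands so the sketch can be inspected without root.
set -euo pipefail

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() {
    # Honor DRY_RUN so the sequence can be previewed safely.
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi
}

nvmf_tcp_init_sketch() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NVMF_TARGET_NAMESPACE"
    run ip link set "$TARGET_IF" netns "$NVMF_TARGET_NAMESPACE"
    run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$TARGET_IF" up
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
}
```

With the two endpoints in separate namespaces, the target app is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix seen above) and the initiator-side ping of 10.0.0.2 genuinely crosses the wire between the two physical ports.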
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:21:10.738 00:21:10.738 --- 10.0.0.2 ping statistics --- 00:21:10.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.738 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:10.738 00:21:10.738 --- 10.0.0.1 ping statistics --- 00:21:10.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.738 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
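The `ipts` call at common.sh@287 expands (at @790) into a plain `iptables` invocation with an `SPDK_NVMF:`-prefixed comment appended, so teardown can later find and delete exactly the rules this test inserted. A sketch of such a wrapper, inferred from that expansion (the echo form here is a dry run; `ipts_sketch` is a hypothetical name):

```shell
# Sketch of the "ipts" helper whose expansion is visible at common.sh@790:
# every rule it inserts is tagged with a comment that records the original
# argument string under an SPDK_NVMF: prefix. Echo form for a safe dry run;
# the real helper would execute iptables instead of printing it.
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Matching the log, `ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT` reproduces both the ACCEPT rule for the NVMe/TCP port 4420 and the comment that mirrors its own arguments.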
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:10.738 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1778680 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1778680 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1778680 ']' 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:10.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.739 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 [2024-12-06 11:22:43.037884] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:10.739 [2024-12-06 11:22:43.037924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.739 [2024-12-06 11:22:43.095183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.739 [2024-12-06 11:22:43.134825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.739 [2024-12-06 11:22:43.134861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.739 [2024-12-06 11:22:43.134868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.739 [2024-12-06 11:22:43.134874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.739 [2024-12-06 11:22:43.134878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.739 [2024-12-06 11:22:43.136485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.739 [2024-12-06 11:22:43.136595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.739 [2024-12-06 11:22:43.138074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.739 [2024-12-06 11:22:43.138077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 [2024-12-06 11:22:43.282041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.739 11:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.739 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.739 Malloc1 00:21:10.739 [2024-12-06 11:22:43.406053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.739 Malloc2 00:21:10.739 Malloc3 00:21:10.739 Malloc4 00:21:10.739 Malloc5 00:21:10.739 Malloc6 00:21:10.739 Malloc7 00:21:10.997 Malloc8 00:21:10.997 Malloc9 
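The ten `for i in "${num_subsystems[@]}"` / `cat` pairs above append one block of RPC lines per subsystem to rpcs.txt, which shutdown.sh@36 then replays through `rpc_cmd` in a single batch, yielding the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener. The file contents are hidden behind `cat` in the trace; the sketch below is a reconstruction using standard SPDK RPC names and the `nqn.2016-06.io.spdk:cnodeN` / `MallocN` naming visible elsewhere in this log (the bdev size/block-size arguments are illustrative assumptions):

```shell
# Reconstruction of the per-subsystem RPC batch built into rpcs.txt above.
# RPC names are the standard SPDK ones; the "64 512" malloc size/block-size
# and the "-s SPDK$i" serial are assumptions, not taken from this trace.
gen_rpcs_sketch() {
    local i
    for i in $(seq 1 10); do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 64 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    done
}
```

Batching all forty RPCs into one `rpc.py` invocation avoids paying the UNIX-socket round-trip and Python start-up cost once per command, which is why the trace shows a file of calls rather than forty separate `rpc_cmd` lines.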
00:21:10.997 Malloc10 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1778744 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1778744 /var/tmp/bdevperf.sock 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1778744 ']' 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:10.997 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": 
${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 
00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 [2024-12-06 11:22:43.874632] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:10.998 [2024-12-06 11:22:43.874673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 
00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.998 { 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme$subsystem", 00:21:10.998 "trtype": "$TEST_TRANSPORT", 00:21:10.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.998 "adrfam": "ipv4", 00:21:10.998 "trsvcid": "$NVMF_PORT", 00:21:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:10.998 "hdgst": ${hdgst:-false}, 00:21:10.998 "ddgst": ${ddgst:-false} 00:21:10.998 }, 00:21:10.998 "method": "bdev_nvme_attach_controller" 00:21:10.998 } 00:21:10.998 EOF 00:21:10.998 )") 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:10.998 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:10.998 "params": { 00:21:10.998 "name": "Nvme1", 00:21:10.998 "trtype": "tcp", 00:21:10.998 "traddr": "10.0.0.2", 00:21:10.998 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme2", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme3", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 
"name": "Nvme4", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme5", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme6", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme7", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme8", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:10.999 
"hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme9", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 },{ 00:21:10.999 "params": { 00:21:10.999 "name": "Nvme10", 00:21:10.999 "trtype": "tcp", 00:21:10.999 "traddr": "10.0.0.2", 00:21:10.999 "adrfam": "ipv4", 00:21:10.999 "trsvcid": "4420", 00:21:10.999 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:10.999 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:10.999 "hdgst": false, 00:21:10.999 "ddgst": false 00:21:10.999 }, 00:21:10.999 "method": "bdev_nvme_attach_controller" 00:21:10.999 }' 00:21:11.257 [2024-12-06 11:22:43.947871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.257 [2024-12-06 11:22:43.985462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1778744 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:12.633 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:13.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1778744 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1778680 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.567 { 00:21:13.567 "params": { 00:21:13.567 "name": "Nvme$subsystem", 00:21:13.567 "trtype": "$TEST_TRANSPORT", 00:21:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.567 "adrfam": "ipv4", 00:21:13.567 "trsvcid": "$NVMF_PORT", 00:21:13.567 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.567 "hdgst": ${hdgst:-false}, 00:21:13.567 "ddgst": ${ddgst:-false} 00:21:13.567 }, 00:21:13.567 "method": "bdev_nvme_attach_controller" 00:21:13.567 } 00:21:13.567 EOF 00:21:13.567 )") 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.567 { 00:21:13.567 "params": { 00:21:13.567 "name": "Nvme$subsystem", 00:21:13.567 "trtype": "$TEST_TRANSPORT", 00:21:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.567 "adrfam": "ipv4", 00:21:13.567 "trsvcid": "$NVMF_PORT", 00:21:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.567 "hdgst": ${hdgst:-false}, 00:21:13.567 "ddgst": ${ddgst:-false} 00:21:13.567 }, 00:21:13.567 "method": "bdev_nvme_attach_controller" 00:21:13.567 } 00:21:13.567 EOF 00:21:13.567 )") 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.567 { 00:21:13.567 "params": { 00:21:13.567 "name": "Nvme$subsystem", 00:21:13.567 "trtype": "$TEST_TRANSPORT", 00:21:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.567 "adrfam": "ipv4", 00:21:13.567 "trsvcid": "$NVMF_PORT", 00:21:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.567 "hdgst": 
${hdgst:-false}, 00:21:13.567 "ddgst": ${ddgst:-false} 00:21:13.567 }, 00:21:13.567 "method": "bdev_nvme_attach_controller" 00:21:13.567 } 00:21:13.567 EOF 00:21:13.567 )") 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.567 { 00:21:13.567 "params": { 00:21:13.567 "name": "Nvme$subsystem", 00:21:13.567 "trtype": "$TEST_TRANSPORT", 00:21:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.567 "adrfam": "ipv4", 00:21:13.567 "trsvcid": "$NVMF_PORT", 00:21:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.567 "hdgst": ${hdgst:-false}, 00:21:13.567 "ddgst": ${ddgst:-false} 00:21:13.567 }, 00:21:13.567 "method": "bdev_nvme_attach_controller" 00:21:13.567 } 00:21:13.567 EOF 00:21:13.567 )") 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.567 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.567 { 00:21:13.567 "params": { 00:21:13.567 "name": "Nvme$subsystem", 00:21:13.567 "trtype": "$TEST_TRANSPORT", 00:21:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.567 "adrfam": "ipv4", 00:21:13.567 "trsvcid": "$NVMF_PORT", 00:21:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.567 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 
00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.568 { 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme$subsystem", 00:21:13.568 "trtype": "$TEST_TRANSPORT", 00:21:13.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "$NVMF_PORT", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.568 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.568 { 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme$subsystem", 00:21:13.568 "trtype": "$TEST_TRANSPORT", 00:21:13.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "$NVMF_PORT", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.568 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:21:13.568 [2024-12-06 11:22:46.354382] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:13.568 [2024-12-06 11:22:46.354429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779288 ] 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.568 { 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme$subsystem", 00:21:13.568 "trtype": "$TEST_TRANSPORT", 00:21:13.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "$NVMF_PORT", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.568 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.568 { 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme$subsystem", 00:21:13.568 "trtype": "$TEST_TRANSPORT", 00:21:13.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "$NVMF_PORT", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.568 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:13.568 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:13.568 { 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme$subsystem", 00:21:13.568 "trtype": "$TEST_TRANSPORT", 00:21:13.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "$NVMF_PORT", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.568 "hdgst": ${hdgst:-false}, 00:21:13.568 "ddgst": ${ddgst:-false} 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 } 00:21:13.568 EOF 00:21:13.568 )") 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:13.568 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme1", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme2", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme3", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme4", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 
00:21:13.568 "name": "Nvme5", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme6", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme7", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme8", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme9", 00:21:13.568 "trtype": "tcp", 00:21:13.568 "traddr": "10.0.0.2", 00:21:13.568 "adrfam": "ipv4", 00:21:13.568 "trsvcid": "4420", 00:21:13.568 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:13.568 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:13.568 "hdgst": false, 00:21:13.568 "ddgst": false 00:21:13.568 }, 00:21:13.568 "method": "bdev_nvme_attach_controller" 00:21:13.568 },{ 00:21:13.568 "params": { 00:21:13.568 "name": "Nvme10", 00:21:13.568 "trtype": "tcp", 00:21:13.569 "traddr": "10.0.0.2", 00:21:13.569 "adrfam": "ipv4", 00:21:13.569 "trsvcid": "4420", 00:21:13.569 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:13.569 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:13.569 "hdgst": false, 00:21:13.569 "ddgst": false 00:21:13.569 }, 00:21:13.569 "method": "bdev_nvme_attach_controller" 00:21:13.569 }' 00:21:13.569 [2024-12-06 11:22:46.429148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.569 [2024-12-06 11:22:46.467452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.472 Running I/O for 1 seconds... 00:21:16.408 2520.00 IOPS, 157.50 MiB/s 00:21:16.409 Latency(us) 00:21:16.409 [2024-12-06T10:22:49.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.409 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme1n1 : 1.10 292.24 18.26 0.00 0.00 217014.74 14060.45 204948.95 00:21:16.409 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme2n1 : 1.11 288.07 18.00 0.00 0.00 217411.96 17277.67 200182.69 00:21:16.409 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme3n1 : 1.12 348.16 21.76 0.00 0.00 177305.13 10664.49 196369.69 00:21:16.409 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme4n1 : 1.12 349.49 21.84 0.00 0.00 173238.42 6076.97 179211.17 00:21:16.409 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme5n1 : 1.11 287.15 17.95 0.00 0.00 209628.72 14894.55 199229.44 00:21:16.409 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme6n1 : 1.11 289.16 18.07 0.00 0.00 205194.71 15490.33 200182.69 00:21:16.409 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme7n1 : 1.10 291.34 18.21 0.00 0.00 200716.94 13345.51 203995.69 00:21:16.409 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme8n1 : 1.10 289.68 18.11 0.00 0.00 199113.63 13762.56 194463.19 00:21:16.409 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme9n1 : 1.14 281.62 17.60 0.00 0.00 199925.02 17754.30 213528.20 00:21:16.409 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.409 Verification LBA range: start 0x0 length 0x400 00:21:16.409 Nvme10n1 : 1.15 333.92 20.87 0.00 0.00 168636.96 3530.01 218294.46 00:21:16.409 [2024-12-06T10:22:49.347Z] =================================================================================================================== 00:21:16.409 [2024-12-06T10:22:49.347Z] Total : 3050.83 190.68 0.00 0.00 195407.90 3530.01 218294.46 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.668 rmmod nvme_tcp 00:21:16.668 rmmod nvme_fabrics 00:21:16.668 rmmod nvme_keyring 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1778680 ']' 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1778680 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1778680 ']' 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1778680 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1778680 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1778680' 00:21:16.668 killing process with pid 1778680 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1778680 00:21:16.668 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1778680 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.236 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.236 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.142 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.142 00:21:19.142 real 0m15.022s 00:21:19.142 user 0m32.630s 00:21:19.142 sys 0m5.782s 00:21:19.142 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.142 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:19.142 ************************************ 00:21:19.142 END TEST nvmf_shutdown_tc1 00:21:19.142 ************************************ 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.142 ************************************ 00:21:19.142 
START TEST nvmf_shutdown_tc2 00:21:19.142 ************************************ 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.142 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.142 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.142 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.143 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:19.143 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:19.143 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:19.143 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.143 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:19.143 Found net devices under 0000:af:00.0: cvl_0_0 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.143 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:19.403 Found net devices under 0000:af:00.1: cvl_0_1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.403 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:19.403 00:21:19.403 --- 10.0.0.2 ping statistics --- 00:21:19.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.403 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:21:19.403 00:21:19.403 --- 10.0.0.1 ping statistics --- 00:21:19.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.403 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.403 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.663 11:22:52 
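[Editor's note] The `nvmf_tcp_init` sequence traced above follows a fixed recipe: move the target NIC into a private network namespace, address both ends, open the NVMe/TCP port with iptables, and verify connectivity with ping in both directions. A dry-run sketch of that sequence — interface names and the `10.0.0.x/24` addresses come from the log, while `emit()` and `nvmf_tcp_init_sketch()` are hypothetical helpers that print each command instead of executing it, so no root is needed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh, in the order
# they appear in the trace. emit() prints commands rather than running them.
emit() { printf '%s\n' "$*"; }

nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns="${1}_ns_spdk"
    emit ip -4 addr flush "$target_if"
    emit ip -4 addr flush "$initiator_if"
    emit ip netns add "$ns"
    emit ip link set "$target_if" netns "$ns"          # target NIC moves into the netns
    emit ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, host netns
    emit ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    emit ip link set "$initiator_if" up
    emit ip netns exec "$ns" ip link set "$target_if" up
    emit ip netns exec "$ns" ip link set lo up
    emit iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    emit ping -c 1 10.0.0.2                            # host -> namespaced target
    emit ip netns exec "$ns" ping -c 1 10.0.0.1        # target -> host
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

The two ping checks at the end correspond to the `64 bytes from 10.0.0.2` / `10.0.0.1` replies in the log; only after both succeed does the helper `return 0` and prepend `ip netns exec` to `NVMF_APP`.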
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1780443 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1780443 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1780443 ']' 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.663 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.663 [2024-12-06 11:22:52.433042] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:19.663 [2024-12-06 11:22:52.433089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.663 [2024-12-06 11:22:52.507763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.663 [2024-12-06 11:22:52.544843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.663 [2024-12-06 11:22:52.544879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.663 [2024-12-06 11:22:52.544885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.663 [2024-12-06 11:22:52.544892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.663 [2024-12-06 11:22:52.544896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.663 [2024-12-06 11:22:52.546460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.663 [2024-12-06 11:22:52.546570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.663 [2024-12-06 11:22:52.546660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:19.663 [2024-12-06 11:22:52.546658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.601 [2024-12-06 11:22:53.284993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.601 11:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.601 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.601 Malloc1 00:21:20.601 [2024-12-06 11:22:53.388295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.601 Malloc2 00:21:20.601 Malloc3 00:21:20.601 Malloc4 00:21:20.601 Malloc5 00:21:20.860 Malloc6 00:21:20.860 Malloc7 00:21:20.860 Malloc8 00:21:20.860 Malloc9 
00:21:20.860 Malloc10 00:21:20.860 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.860 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:20.860 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.860 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1780757 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1780757 /var/tmp/bdevperf.sock 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1780757 ']' 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:21.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.120 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.120 { 00:21:21.120 "params": { 00:21:21.120 "name": "Nvme$subsystem", 00:21:21.120 "trtype": "$TEST_TRANSPORT", 00:21:21.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.120 "adrfam": "ipv4", 00:21:21.120 "trsvcid": "$NVMF_PORT", 00:21:21.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.120 "hdgst": ${hdgst:-false}, 00:21:21.120 "ddgst": ${ddgst:-false} 00:21:21.120 }, 00:21:21.120 "method": "bdev_nvme_attach_controller" 00:21:21.120 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 
"adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": 
${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 [2024-12-06 11:22:53.859079] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:21:21.121 [2024-12-06 11:22:53.859126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780757 ] 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": 
"bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.121 { 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme$subsystem", 00:21:21.121 "trtype": "$TEST_TRANSPORT", 00:21:21.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "$NVMF_PORT", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.121 "hdgst": ${hdgst:-false}, 00:21:21.121 "ddgst": ${ddgst:-false} 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 } 00:21:21.121 EOF 00:21:21.121 )") 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
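[Editor's note] The `gen_nvmf_target_json` machinery above captures one JSON fragment per subsystem with a heredoc (`config+=("$(cat <<-EOF ...)")`) and then joins the fragments with `IFS=,` before pretty-printing through `jq .`. A simplified sketch of that accumulate-and-join pattern, using two subsystems and dropping the `jq` step so the sketch has no external dependency:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem heredoc accumulation from nvmf/common.sh:
# each loop iteration captures one JSON fragment, and "${config[*]}" with
# IFS=, joins them into the comma-separated controller list seen in the log.
# The real helper also pipes the result through "jq ."; omitted here.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp",
  "traddr": "10.0.0.2", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done

IFS=,   # "${config[*]}" joins array elements with the first character of IFS
printf '%s\n' "${config[*]}"
```

Because `$subsystem` expands when the heredoc is read, each fragment gets its own `NvmeN` name and `cnodeN` subsystem NQN, which is exactly how the `Nvme1`..`Nvme9` blocks in the final `printf '%s\n' '{...},{...}'` output are produced.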
00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:21.121 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme1", 00:21:21.121 "trtype": "tcp", 00:21:21.121 "traddr": "10.0.0.2", 00:21:21.121 "adrfam": "ipv4", 00:21:21.121 "trsvcid": "4420", 00:21:21.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.121 "hdgst": false, 00:21:21.121 "ddgst": false 00:21:21.121 }, 00:21:21.121 "method": "bdev_nvme_attach_controller" 00:21:21.121 },{ 00:21:21.121 "params": { 00:21:21.121 "name": "Nvme2", 00:21:21.121 "trtype": "tcp", 00:21:21.121 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme3", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme4", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 
00:21:21.122 "name": "Nvme5", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme6", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme7", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme8", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme9", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 },{ 00:21:21.122 "params": { 00:21:21.122 "name": "Nvme10", 00:21:21.122 "trtype": "tcp", 00:21:21.122 "traddr": "10.0.0.2", 00:21:21.122 "adrfam": "ipv4", 00:21:21.122 "trsvcid": "4420", 00:21:21.122 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.122 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.122 "hdgst": false, 00:21:21.122 "ddgst": false 00:21:21.122 }, 00:21:21.122 "method": "bdev_nvme_attach_controller" 00:21:21.122 }' 00:21:21.122 [2024-12-06 11:22:53.935169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.122 [2024-12-06 11:22:53.972787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.028 Running I/O for 10 seconds... 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:23.028 11:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:23.028 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:23.287 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:23.287 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:23.287 11:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:23.287 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.287 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.287 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1780757 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1780757 ']' 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1780757 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.287 11:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780757 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780757' 00:21:23.287 killing process with pid 1780757 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1780757 00:21:23.287 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1780757 00:21:23.287 Received shutdown signal, test time was about 0.645034 seconds 00:21:23.287 00:21:23.287 Latency(us) 00:21:23.287 [2024-12-06T10:22:56.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme1n1 : 0.59 323.58 20.22 0.00 0.00 194539.52 19422.49 182070.92 00:21:23.287 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme2n1 : 0.61 310.01 19.38 0.00 0.00 197503.71 12451.84 204948.95 00:21:23.287 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme3n1 : 0.59 325.85 20.37 0.00 0.00 183405.38 13405.09 190650.18 00:21:23.287 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme4n1 : 0.61 314.48 19.65 0.00 0.00 186055.84 
17515.99 189696.93 00:21:23.287 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme5n1 : 0.61 316.78 19.80 0.00 0.00 180043.25 14537.08 197322.94 00:21:23.287 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme6n1 : 0.64 307.25 19.20 0.00 0.00 169737.94 14060.45 194463.19 00:21:23.287 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme7n1 : 0.60 319.89 19.99 0.00 0.00 168480.89 31218.97 196369.69 00:21:23.287 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme8n1 : 0.60 321.61 20.10 0.00 0.00 162569.15 14894.55 184930.68 00:21:23.287 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme9n1 : 0.59 227.99 14.25 0.00 0.00 219022.07 4825.83 209715.20 00:21:23.287 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.287 Verification LBA range: start 0x0 length 0x400 00:21:23.287 Nvme10n1 : 0.58 218.92 13.68 0.00 0.00 224047.01 24188.74 224967.21 00:21:23.287 [2024-12-06T10:22:56.225Z] =================================================================================================================== 00:21:23.287 [2024-12-06T10:22:56.225Z] Total : 2986.36 186.65 0.00 0.00 186225.77 4825.83 224967.21 00:21:23.547 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1780443 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.483 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.483 rmmod nvme_tcp 00:21:24.483 rmmod nvme_fabrics 00:21:24.742 rmmod nvme_keyring 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1780443 ']' 
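The teardown path above checks for a recorded pid (`'[' -n 1780443 ']'`) before handing it to the `killprocess` helper, whose trace appears both here and for the bdevperf pid 1780757 earlier. A minimal sketch of that helper's probe/kill/reap pattern, using a background `sleep` as a stand-in for the target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from autotest_common.sh: probe the pid
# with `kill -0` (sends no signal, only checks existence), read its command
# name, signal it, then reap it with `wait`.
sleep 30 &            # stand-in for the nvmf target / bdevperf process
pid=$!
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
  process_name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap; exit status reflects the signal
fi
```

The real helper additionally special-cases `process_name=reactor_N` (SPDK reactors show up under that comm name, as the `ps --no-headers -o comm=` lines in the trace show) and refuses to `kill` when the name resolves to `sudo`.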
00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1780443 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1780443 ']' 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1780443 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780443 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780443' 00:21:24.742 killing process with pid 1780443 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1780443 00:21:24.742 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1780443 00:21:25.001 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.001 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.001 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.002 11:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.002 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.068 00:21:27.068 real 0m7.881s 00:21:27.068 user 0m23.805s 00:21:27.068 sys 0m1.293s 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.068 ************************************ 00:21:27.068 END TEST nvmf_shutdown_tc2 00:21:27.068 ************************************ 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:27.068 11:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.068 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.327 ************************************ 00:21:27.327 START TEST nvmf_shutdown_tc3 00:21:27.327 ************************************ 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.327 11:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.327 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
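The `e810=()` / `x722=()` / `mlx=()` setup in the trace seeds NIC-family arrays from a `pci_bus_cache` lookup keyed by vendor:device ID (which is how the run later reports `Found 0000:af:00.0 (0x8086 - 0x159b)` as an E810 port). A sketch of that bucketing idea, with a canned address-to-ID map standing in for the real PCI bus scan:

```shell
#!/usr/bin/env bash
# Sketch of the device-classification idea behind gather_supported_nvmf_pci_devs:
# bucket PCI addresses into NIC families by vendor:device ID. The map below is
# sample data for illustration; the real helper populates it from the PCI bus.
declare -A pci_bus_cache=(
  ["0000:af:00.0"]="0x8086:0x159b"   # Intel E810, as seen in this run
  ["0000:af:00.1"]="0x8086:0x159b"
  ["0000:b0:00.0"]="0x8086:0x37d2"   # Intel X722
)
e810=() x722=() mlx=()
for pci in "${!pci_bus_cache[@]}"; do
  case "${pci_bus_cache[$pci]}" in
    0x8086:0x1592|0x8086:0x159b) e810+=("$pci") ;;
    0x8086:0x37d2)               x722+=("$pci") ;;
    0x15b3:*)                    mlx+=("$pci") ;;   # Mellanox vendor ID
  esac
done
echo "e810: ${#e810[@]}, x722: ${#x722[@]}, mlx: ${#mlx[@]}"
```

With two E810 ports found, `pci_devs` is set to the e810 list and each device's `net/` directory is walked to discover the `cvl_0_0`/`cvl_0_1` interfaces used for the TCP transport later in the trace.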
00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:27.328 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.328 11:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:27.328 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:27.328 Found net devices under 0000:af:00.0: cvl_0_0 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:27.328 Found net devices under 0000:af:00.1: cvl_0_1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.328 
11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.328 11:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.328 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:27.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:21:27.588 00:21:27.588 --- 10.0.0.2 ping statistics --- 00:21:27.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.588 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:27.588 00:21:27.588 --- 10.0.0.1 ping statistics --- 00:21:27.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.588 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1781944 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1781944 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1781944 ']' 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.588 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.588 [2024-12-06 11:23:00.398580] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:27.588 [2024-12-06 11:23:00.398624] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.588 [2024-12-06 11:23:00.476067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.588 [2024-12-06 11:23:00.514885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.588 [2024-12-06 11:23:00.514923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.588 [2024-12-06 11:23:00.514929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.588 [2024-12-06 11:23:00.514935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.588 [2024-12-06 11:23:00.514940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:27.588 [2024-12-06 11:23:00.516536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.588 [2024-12-06 11:23:00.516648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.588 [2024-12-06 11:23:00.516680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.588 [2024-12-06 11:23:00.516681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.527 [2024-12-06 11:23:01.246860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.527 11:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.527 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:28.527 Malloc1 00:21:28.527 [2024-12-06 11:23:01.350169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.527 Malloc2 00:21:28.527 Malloc3 00:21:28.527 Malloc4 00:21:28.787 Malloc5 00:21:28.787 Malloc6 00:21:28.787 Malloc7 00:21:28.787 Malloc8 00:21:28.787 Malloc9 
00:21:28.787 Malloc10 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1782255 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1782255 /var/tmp/bdevperf.sock 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1782255 ']' 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.047 { 00:21:29.047 "params": { 00:21:29.047 "name": "Nvme$subsystem", 00:21:29.047 "trtype": "$TEST_TRANSPORT", 00:21:29.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.047 "adrfam": "ipv4", 00:21:29.047 "trsvcid": "$NVMF_PORT", 00:21:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.047 "hdgst": ${hdgst:-false}, 00:21:29.047 "ddgst": ${ddgst:-false} 00:21:29.047 }, 00:21:29.047 "method": "bdev_nvme_attach_controller" 00:21:29.047 } 00:21:29.047 EOF 00:21:29.047 )") 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.047 { 00:21:29.047 "params": { 00:21:29.047 "name": "Nvme$subsystem", 00:21:29.047 "trtype": "$TEST_TRANSPORT", 00:21:29.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.047 
"adrfam": "ipv4", 00:21:29.047 "trsvcid": "$NVMF_PORT", 00:21:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.047 "hdgst": ${hdgst:-false}, 00:21:29.047 "ddgst": ${ddgst:-false} 00:21:29.047 }, 00:21:29.047 "method": "bdev_nvme_attach_controller" 00:21:29.047 } 00:21:29.047 EOF 00:21:29.047 )") 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.047 { 00:21:29.047 "params": { 00:21:29.047 "name": "Nvme$subsystem", 00:21:29.047 "trtype": "$TEST_TRANSPORT", 00:21:29.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.047 "adrfam": "ipv4", 00:21:29.047 "trsvcid": "$NVMF_PORT", 00:21:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.047 "hdgst": ${hdgst:-false}, 00:21:29.047 "ddgst": ${ddgst:-false} 00:21:29.047 }, 00:21:29.047 "method": "bdev_nvme_attach_controller" 00:21:29.047 } 00:21:29.047 EOF 00:21:29.047 )") 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.047 { 00:21:29.047 "params": { 00:21:29.047 "name": "Nvme$subsystem", 00:21:29.047 "trtype": "$TEST_TRANSPORT", 00:21:29.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.047 "adrfam": "ipv4", 00:21:29.047 "trsvcid": "$NVMF_PORT", 00:21:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:29.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.047 "hdgst": ${hdgst:-false}, 00:21:29.047 "ddgst": ${ddgst:-false} 00:21:29.047 }, 00:21:29.047 "method": "bdev_nvme_attach_controller" 00:21:29.047 } 00:21:29.047 EOF 00:21:29.047 )") 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.047 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.047 { 00:21:29.047 "params": { 00:21:29.047 "name": "Nvme$subsystem", 00:21:29.047 "trtype": "$TEST_TRANSPORT", 00:21:29.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.047 "adrfam": "ipv4", 00:21:29.047 "trsvcid": "$NVMF_PORT", 00:21:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.047 "hdgst": ${hdgst:-false}, 00:21:29.047 "ddgst": ${ddgst:-false} 00:21:29.047 }, 00:21:29.047 "method": "bdev_nvme_attach_controller" 00:21:29.047 } 00:21:29.047 EOF 00:21:29.047 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.048 { 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme$subsystem", 00:21:29.048 "trtype": "$TEST_TRANSPORT", 00:21:29.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "$NVMF_PORT", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.048 "hdgst": ${hdgst:-false}, 00:21:29.048 "ddgst": 
${ddgst:-false} 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 } 00:21:29.048 EOF 00:21:29.048 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.048 { 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme$subsystem", 00:21:29.048 "trtype": "$TEST_TRANSPORT", 00:21:29.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "$NVMF_PORT", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.048 "hdgst": ${hdgst:-false}, 00:21:29.048 "ddgst": ${ddgst:-false} 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 } 00:21:29.048 EOF 00:21:29.048 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 [2024-12-06 11:23:01.824900] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:21:29.048 [2024-12-06 11:23:01.824951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1782255 ] 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.048 { 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme$subsystem", 00:21:29.048 "trtype": "$TEST_TRANSPORT", 00:21:29.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "$NVMF_PORT", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.048 "hdgst": ${hdgst:-false}, 00:21:29.048 "ddgst": ${ddgst:-false} 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 } 00:21:29.048 EOF 00:21:29.048 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.048 { 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme$subsystem", 00:21:29.048 "trtype": "$TEST_TRANSPORT", 00:21:29.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "$NVMF_PORT", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.048 "hdgst": ${hdgst:-false}, 00:21:29.048 "ddgst": ${ddgst:-false} 00:21:29.048 }, 00:21:29.048 "method": 
"bdev_nvme_attach_controller" 00:21:29.048 } 00:21:29.048 EOF 00:21:29.048 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.048 { 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme$subsystem", 00:21:29.048 "trtype": "$TEST_TRANSPORT", 00:21:29.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "$NVMF_PORT", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.048 "hdgst": ${hdgst:-false}, 00:21:29.048 "ddgst": ${ddgst:-false} 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 } 00:21:29.048 EOF 00:21:29.048 )") 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.048 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme1", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme2", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme3", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme4", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 
00:21:29.048 "name": "Nvme5", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme6", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme7", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme8", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme9", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.048 "trsvcid": "4420", 00:21:29.048 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.048 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:29.048 "hdgst": false, 00:21:29.048 "ddgst": false 00:21:29.048 }, 00:21:29.048 "method": "bdev_nvme_attach_controller" 00:21:29.048 },{ 00:21:29.048 "params": { 00:21:29.048 "name": "Nvme10", 00:21:29.048 "trtype": "tcp", 00:21:29.048 "traddr": "10.0.0.2", 00:21:29.048 "adrfam": "ipv4", 00:21:29.049 "trsvcid": "4420", 00:21:29.049 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.049 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.049 "hdgst": false, 00:21:29.049 "ddgst": false 00:21:29.049 }, 00:21:29.049 "method": "bdev_nvme_attach_controller" 00:21:29.049 }' 00:21:29.049 [2024-12-06 11:23:01.898421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.049 [2024-12-06 11:23:01.935988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.428 Running I/O for 10 seconds... 00:21:30.428 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.428 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:30.428 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:30.428 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.428 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.686 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.686 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:30.687 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:30.946 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1781944 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1781944 ']' 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1781944 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.205 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1781944 00:21:31.481 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.481 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.481 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1781944' 00:21:31.481 killing process with pid 1781944 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1781944 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1781944
00:21:31.481 [2024-12-06 11:23:04.164008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecce90 is same with the state(6) to be set
00:21:31.482 [2024-12-06 11:23:04.166167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.482 [2024-12-06 11:23:04.166200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.482 [2024-12-06 11:23:04.166210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.482 [2024-12-06 11:23:04.166217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.482 [2024-12-06 11:23:04.166224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.482 [2024-12-06 11:23:04.166230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.482 [2024-12-06 11:23:04.166237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.482 [2024-12-06 11:23:04.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.482 [2024-12-06 11:23:04.166250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22412e0 is same with the state(6) to be set
00:21:31.482 [2024-12-06 11:23:04.169676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf8f0 is same with the state(6) to be set
00:21:31.483 [2024-12-06 11:23:04.171305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set
00:21:31.483 [2024-12-06 11:23:04.171647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360
is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.171652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.171657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.171663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.171669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.171675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd360 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 
00:21:31.483 [2024-12-06 11:23:04.172947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.172996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.173002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.173008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.173013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.173019] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.483 [2024-12-06 11:23:04.173025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 
is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 
00:21:31.484 [2024-12-06 11:23:04.173230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.173263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd830 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.174490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece1f0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175170] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 
is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.484 [2024-12-06 11:23:04.175387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 
00:21:31.485 [2024-12-06 11:23:04.175392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175465] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.175482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece6e0 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 
is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 
00:21:31.485 [2024-12-06 11:23:04.176554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set 00:21:31.485 [2024-12-06 11:23:04.176623] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.485 [2024-12-06 11:23:04.176667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecea60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235320 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2665610 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269ec20 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.176946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.176992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.176999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cf60 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.177026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235120 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.177112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155610 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.177197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.486 [2024-12-06 11:23:04.177247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240440 is same with the state(6) to be set
00:21:31.486 [2024-12-06 11:23:04.177270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22412e0 (9): Bad file
descriptor
00:21:31.486 [2024-12-06 11:23:04.177450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.486 [2024-12-06 11:23:04.177469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.486 [2024-12-06 11:23:04.177490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.486 [2024-12-06 11:23:04.177507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.486 [2024-12-06 11:23:04.177515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.487 [2024-12-06 11:23:04.177843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.487 [2024-12-06 11:23:04.177845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.487 [2024-12-06 11:23:04.177850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.177988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.177994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.177995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecef30 is same with the state(6) to be set
00:21:31.488 [2024-12-06 11:23:04.178004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.488 [2024-12-06 11:23:04.178263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.488 [2024-12-06 11:23:04.178269] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-12-06 11:23:04.178276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-12-06 11:23:04.178282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-12-06 11:23:04.178289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-12-06 11:23:04.178296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.488 [2024-12-06 11:23:04.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.488 [2024-12-06 11:23:04.178309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 
[2024-12-06 11:23:04.178425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.489 [2024-12-06 11:23:04.178459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.489 [2024-12-06 11:23:04.178539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same 
with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 
[2024-12-06 11:23:04.178648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178719] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 
is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.489 [2024-12-06 11:23:04.178884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.490 [2024-12-06 11:23:04.178889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.490 [2024-12-06 11:23:04.178895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf400 is same with the state(6) to be set 00:21:31.490 [2024-12-06 11:23:04.179371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.490 [2024-12-06 11:23:04.179495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.490 [2024-12-06 11:23:04.179852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.490 [2024-12-06 11:23:04.179860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179903] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.179992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.179998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.180007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 
11:23:04.193872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.193984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.193994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.491 [2024-12-06 11:23:04.194204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.491 [2024-12-06 11:23:04.194587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235320 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2665610 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.491 [2024-12-06 11:23:04.194668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.491 [2024-12-06 11:23:04.194688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.491 [2024-12-06 11:23:04.194709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.491 [2024-12-06 11:23:04.194731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26aeba0 is same with the state(6) to be set 00:21:31.491 [2024-12-06 11:23:04.194760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x269ec20 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cf60 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235120 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155610 (9): Bad file descriptor 00:21:31.491 [2024-12-06 11:23:04.194839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.491 [2024-12-06 11:23:04.194850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.491 [2024-12-06 11:23:04.194859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.492 [2024-12-06 11:23:04.194868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.194877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.492 [2024-12-06 11:23:04.194886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.194895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.492 [2024-12-06 11:23:04.194903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.194911] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269ea40 is same with the state(6) to be set 00:21:31.492 [2024-12-06 11:23:04.194929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2240440 (9): Bad file descriptor 00:21:31.492 [2024-12-06 11:23:04.196240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:31.492 [2024-12-06 11:23:04.196466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.492 [2024-12-06 11:23:04.196935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.492 [2024-12-06 11:23:04.196943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.196957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.196966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.196976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.196985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.196994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197003] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 11:23:04.197214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.493 [2024-12-06 11:23:04.197223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.493 [2024-12-06 
11:23:04.197234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.197483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.197492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.198908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:31.493 [2024-12-06 11:23:04.199015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.493 [2024-12-06 11:23:04.199234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.493 [2024-12-06 11:23:04.199242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.494 [2024-12-06 11:23:04.199946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.494 [2024-12-06 11:23:04.199955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.199965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.199973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.199984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.199993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.200276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.200284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.203280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:31.495 [2024-12-06 11:23:04.203322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:31.495 [2024-12-06 11:23:04.203477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.495 [2024-12-06 11:23:04.203499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266cf60 with addr=10.0.0.2, port=4420
00:21:31.495 [2024-12-06 11:23:04.203516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cf60 is same with the state(6) to be set
00:21:31.495 [2024-12-06 11:23:04.204481] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.204513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:31.495 [2024-12-06 11:23:04.204702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.495 [2024-12-06 11:23:04.204722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2665610 with addr=10.0.0.2, port=4420
00:21:31.495 [2024-12-06 11:23:04.204735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2665610 is same with the state(6) to be set
00:21:31.495 [2024-12-06 11:23:04.204816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.495 [2024-12-06 11:23:04.204832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22412e0 with addr=10.0.0.2, port=4420
00:21:31.495 [2024-12-06 11:23:04.204842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22412e0 is same with the state(6) to be set
00:21:31.495 [2024-12-06 11:23:04.204857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cf60 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.205227] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.205286] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.205655] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.205711] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.205760] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:31.495 [2024-12-06 11:23:04.205984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:31.495 [2024-12-06 11:23:04.206009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2235120 with addr=10.0.0.2, port=4420
00:21:31.495 [2024-12-06 11:23:04.206020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235120 is same with the state(6) to be set
00:21:31.495 [2024-12-06 11:23:04.206035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2665610 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.206049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22412e0 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.206069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:31.495 [2024-12-06 11:23:04.206081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:31.495 [2024-12-06 11:23:04.206092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:31.495 [2024-12-06 11:23:04.206104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:31.495 [2024-12-06 11:23:04.206131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26aeba0 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.206173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269ea40 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.206347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235120 (9): Bad file descriptor
00:21:31.495 [2024-12-06 11:23:04.206364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:31.495 [2024-12-06 11:23:04.206375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:31.495 [2024-12-06 11:23:04.206384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:31.495 [2024-12-06 11:23:04.206399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:31.495 [2024-12-06 11:23:04.206411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:31.495 [2024-12-06 11:23:04.206421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:31.495 [2024-12-06 11:23:04.206431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:31.495 [2024-12-06 11:23:04.206440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:31.495 [2024-12-06 11:23:04.206507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.206522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.206538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.206550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.206574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.206587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.206597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.206610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.495 [2024-12-06 11:23:04.206620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.495 [2024-12-06 11:23:04.206633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.496 [2024-12-06 11:23:04.206904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.496 [2024-12-06 11:23:04.206916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.206926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.206938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.206948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.206961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.206971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.206984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.206994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 
11:23:04.207445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.496 [2024-12-06 11:23:04.207546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.496 [2024-12-06 11:23:04.207556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 
[2024-12-06 11:23:04.207826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.207984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.207995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24463a0 is same with the state(6) to be set 00:21:31.497 [2024-12-06 11:23:04.209424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.497 [2024-12-06 11:23:04.209638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.497 [2024-12-06 11:23:04.209870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.497 [2024-12-06 11:23:04.209884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.209895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.209911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.209921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.209933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.209943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.209955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.209964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.209986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.209999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 
11:23:04.210146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210273] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 
[2024-12-06 11:23:04.210530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.498 [2024-12-06 11:23:04.210757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.498 [2024-12-06 11:23:04.210770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.210780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.210793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.210803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.210815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.210824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.210836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.210846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.210858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.210867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.210878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2618560 is same with the state(6) to be set 00:21:31.499 [2024-12-06 11:23:04.211894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.499 [2024-12-06 11:23:04.211916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.211931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.211947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.211961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.211978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.211985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.211993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.499 [2024-12-06 11:23:04.212185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.499 [2024-12-06 11:23:04.212353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.499 [2024-12-06 11:23:04.212360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 
11:23:04.212529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212614] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 
[2024-12-06 11:23:04.212789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.212879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.212886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2648e70 is same with the state(6) to be set 00:21:31.500 [2024-12-06 11:23:04.213871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.213884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.213895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.213903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.213912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.213919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.213928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.213935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.500 [2024-12-06 11:23:04.213944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.500 [2024-12-06 11:23:04.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.213965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.213972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.213981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.213988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.213996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.501 [2024-12-06 11:23:04.214041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 
11:23:04.214389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.501 [2024-12-06 11:23:04.214570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.501 [2024-12-06 11:23:04.214576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 
[2024-12-06 11:23:04.214651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.214857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.214864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2485f00 is same with the state(6) to be set 00:21:31.502 [2024-12-06 11:23:04.215821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.215838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.215849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.215860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.215893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 
1] Ctrlr is in error state 00:21:31.502 [2024-12-06 11:23:04.215901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:31.502 [2024-12-06 11:23:04.215912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:31.502 [2024-12-06 11:23:04.215921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:31.502 [2024-12-06 11:23:04.216276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.502 [2024-12-06 11:23:04.216292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2235320 with addr=10.0.0.2, port=4420 00:21:31.502 [2024-12-06 11:23:04.216300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235320 is same with the state(6) to be set 00:21:31.502 [2024-12-06 11:23:04.216426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.502 [2024-12-06 11:23:04.216436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2240440 with addr=10.0.0.2, port=4420 00:21:31.502 [2024-12-06 11:23:04.216444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240440 is same with the state(6) to be set 00:21:31.502 [2024-12-06 11:23:04.216597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.502 [2024-12-06 11:23:04.216607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155610 with addr=10.0.0.2, port=4420 00:21:31.502 [2024-12-06 11:23:04.216615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155610 is same with the state(6) to be set 00:21:31.502 [2024-12-06 11:23:04.216743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.502 [2024-12-06 
11:23:04.216754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269ec20 with addr=10.0.0.2, port=4420 00:21:31.502 [2024-12-06 11:23:04.216761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269ec20 is same with the state(6) to be set 00:21:31.502 [2024-12-06 11:23:04.217676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.217691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.217700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.217708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:31.502 [2024-12-06 11:23:04.217747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235320 (9): Bad file descriptor 00:21:31.502 [2024-12-06 11:23:04.217757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2240440 (9): Bad file descriptor 00:21:31.502 [2024-12-06 11:23:04.217766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155610 (9): Bad file descriptor 00:21:31.502 [2024-12-06 11:23:04.217775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269ec20 (9): Bad file descriptor 00:21:31.502 [2024-12-06 11:23:04.217835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.502 [2024-12-06 11:23:04.217946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:31.502 [2024-12-06 11:23:04.217954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.217961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.217970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.217977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.217986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.217992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.503 [2024-12-06 11:23:04.218126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.503 [2024-12-06 11:23:04.218135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:21:31.503 [2024-12-06 11:23:04.218142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.503 [2024-12-06 11:23:04.218151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:19 through cid:63, lba advancing by 128 up to 32640 ...]
00:21:31.504 [2024-12-06 11:23:04.225749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x358fcc0 is same with the state(6) to be set
00:21:31.504 [2024-12-06 11:23:04.226673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.504 [2024-12-06 11:23:04.226686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:63, lba advancing by 128 from 24704 up to 32640 ...]
00:21:31.506 [2024-12-06 11:23:04.227618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2484c70 is same with the state(6) to be set
00:21:31.506 [2024-12-06 11:23:04.228526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:31.506 task offset: 32768 on job bdev=Nvme4n1 fails
00:21:31.506
00:21:31.506 Latency(us)
00:21:31.506 [2024-12-06T10:23:04.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.506 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.506 Job: Nvme1n1 ended in about 0.90 seconds with error
00:21:31.506 Verification LBA range: start 0x0 length 0x400
00:21:31.506 Nvme1n1 : 0.90 218.23 13.64 71.26 0.00 218876.83 5034.36 202089.19
00:21:31.506 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:31.506 Job: Nvme2n1 ended in about 0.91 seconds with error
00:21:31.506 Verification LBA
range: start 0x0 length 0x400 00:21:31.506 Nvme2n1 : 0.91 211.95 13.25 70.65 0.00 220638.37 13464.67 205902.20 00:21:31.506 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme3n1 ended in about 0.91 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme3n1 : 0.91 291.64 18.23 70.43 0.00 169414.00 12273.11 192556.68 00:21:31.506 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme4n1 ended in about 0.89 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme4n1 : 0.89 286.70 17.92 71.67 0.00 168219.09 12511.42 201135.94 00:21:31.506 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme5n1 ended in about 0.90 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme5n1 : 0.90 238.99 14.94 71.14 0.00 191199.84 22639.71 194463.19 00:21:31.506 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme6n1 ended in about 0.90 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme6n1 : 0.90 214.43 13.40 71.48 0.00 203792.06 17277.67 210668.45 00:21:31.506 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme7n1 ended in about 0.91 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme7n1 : 0.91 216.34 13.52 70.28 0.00 200214.88 17635.14 210668.45 00:21:31.506 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme8n1 ended in about 0.92 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme8n1 : 0.92 207.92 12.99 69.31 0.00 203817.43 12928.47 202089.19 00:21:31.506 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme9n1 ended in about 
0.93 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme9n1 : 0.93 207.50 12.97 69.17 0.00 200724.71 17396.83 219247.71 00:21:31.506 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.506 Job: Nvme10n1 ended in about 0.91 seconds with error 00:21:31.506 Verification LBA range: start 0x0 length 0x400 00:21:31.506 Nvme10n1 : 0.91 214.77 13.42 70.13 0.00 190979.81 6345.08 199229.44 00:21:31.506 [2024-12-06T10:23:04.444Z] =================================================================================================================== 00:21:31.506 [2024-12-06T10:23:04.444Z] Total : 2308.47 144.28 705.52 0.00 195370.60 5034.36 219247.71 00:21:31.506 [2024-12-06 11:23:04.259118] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:31.506 [2024-12-06 11:23:04.259162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:31.506 [2024-12-06 11:23:04.259459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 [2024-12-06 11:23:04.259479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266cf60 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.259490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cf60 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.259749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 [2024-12-06 11:23:04.259762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22412e0 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.259770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22412e0 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.259962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 
[2024-12-06 11:23:04.259974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2665610 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.259980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2665610 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.260122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 [2024-12-06 11:23:04.260133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2235120 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.260140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235120 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.260147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:31.506 [2024-12-06 11:23:04.260154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:31.506 [2024-12-06 11:23:04.260163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:31.506 [2024-12-06 11:23:04.260172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:31.506 [2024-12-06 11:23:04.260181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:31.506 [2024-12-06 11:23:04.260187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:31.506 [2024-12-06 11:23:04.260193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:21:31.506 [2024-12-06 11:23:04.260199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:31.506 [2024-12-06 11:23:04.260206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:31.506 [2024-12-06 11:23:04.260213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:31.506 [2024-12-06 11:23:04.260220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:31.506 [2024-12-06 11:23:04.260225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:31.506 [2024-12-06 11:23:04.260232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:31.506 [2024-12-06 11:23:04.260238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:31.506 [2024-12-06 11:23:04.260244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:31.506 [2024-12-06 11:23:04.260250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:31.506 [2024-12-06 11:23:04.260309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235120 (9): Bad file descriptor 00:21:31.506 [2024-12-06 11:23:04.260325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2665610 (9): Bad file descriptor 00:21:31.506 [2024-12-06 11:23:04.260337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22412e0 (9): Bad file descriptor 00:21:31.506 [2024-12-06 11:23:04.260350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cf60 (9): Bad file descriptor 00:21:31.506 [2024-12-06 11:23:04.260682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 [2024-12-06 11:23:04.260697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26aeba0 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.260704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26aeba0 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.260768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.506 [2024-12-06 11:23:04.260778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269ea40 with addr=10.0.0.2, port=4420 00:21:31.506 [2024-12-06 11:23:04.260785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269ea40 is same with the state(6) to be set 00:21:31.506 [2024-12-06 11:23:04.260807] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:31.506 [2024-12-06 11:23:04.260817] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:21:31.507 [2024-12-06 11:23:04.260827] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:31.507 [2024-12-06 11:23:04.260837] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:31.507 [2024-12-06 11:23:04.260846] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:31.507 [2024-12-06 11:23:04.260856] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:31.507 [2024-12-06 11:23:04.260866] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:31.507 [2024-12-06 11:23:04.260875] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:21:31.507 [2024-12-06 11:23:04.261311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:31.507 [2024-12-06 11:23:04.261323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:31.507 [2024-12-06 11:23:04.261331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:31.507 [2024-12-06 11:23:04.261338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:31.507 [2024-12-06 11:23:04.261379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26aeba0 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.261390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269ea40 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.261397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.261404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.261410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.261418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.261424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.261430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.261436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:21:31.507 [2024-12-06 11:23:04.261442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.261453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.261459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.261466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.261472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.261479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.261484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.261490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.261496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:31.507 [2024-12-06 11:23:04.261783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.507 [2024-12-06 11:23:04.261796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269ec20 with addr=10.0.0.2, port=4420 00:21:31.507 [2024-12-06 11:23:04.261804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269ec20 is same with the state(6) to be set 00:21:31.507 [2024-12-06 11:23:04.261926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.507 [2024-12-06 11:23:04.261935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155610 with addr=10.0.0.2, port=4420 00:21:31.507 [2024-12-06 11:23:04.261942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155610 is same with the state(6) to be set 00:21:31.507 [2024-12-06 11:23:04.262092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.507 [2024-12-06 11:23:04.262102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2240440 with addr=10.0.0.2, port=4420 00:21:31.507 [2024-12-06 11:23:04.262109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240440 is same with the state(6) to be set 00:21:31.507 [2024-12-06 11:23:04.262336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.507 [2024-12-06 11:23:04.262348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2235320 with addr=10.0.0.2, port=4420 00:21:31.507 [2024-12-06 11:23:04.262354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2235320 is same with the state(6) to be set 00:21:31.507 [2024-12-06 11:23:04.262361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262366] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.262386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:31.507 [2024-12-06 11:23:04.262443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269ec20 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.262456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155610 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.262466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2240440 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.262474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2235320 (9): Bad file descriptor 00:21:31.507 [2024-12-06 11:23:04.262499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.262524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:31.507 [2024-12-06 11:23:04.262549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:31.507 [2024-12-06 11:23:04.262573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:31.507 [2024-12-06 11:23:04.262580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:31.507 [2024-12-06 11:23:04.262586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:31.507 [2024-12-06 11:23:04.262592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:31.766 11:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1782255 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1782255 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1782255 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.703 rmmod nvme_tcp 00:21:32.703 rmmod nvme_fabrics 00:21:32.703 rmmod nvme_keyring 00:21:32.703 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:32.961 11:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1781944 ']' 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1781944 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1781944 ']' 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1781944 00:21:32.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1781944) - No such process 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1781944 is not found' 00:21:32.961 Process with pid 1781944 is not found 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.961 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.865 00:21:34.865 real 0m7.713s 00:21:34.865 user 0m18.763s 00:21:34.865 sys 0m1.319s 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.865 ************************************ 00:21:34.865 END TEST nvmf_shutdown_tc3 00:21:34.865 ************************************ 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.865 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:35.125 ************************************ 00:21:35.125 START TEST nvmf_shutdown_tc4 00:21:35.125 ************************************ 00:21:35.125 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.125 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.125 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:35.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:35.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.125 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.125 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:21:35.126 Found net devices under 0000:af:00.0: cvl_0_0 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:35.126 Found net devices under 0000:af:00.1: cvl_0_1 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.126 11:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.126 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.126 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:21:35.385 00:21:35.385 --- 10.0.0.2 ping statistics --- 00:21:35.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.385 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:21:35.385 00:21:35.385 --- 10.0.0.1 ping statistics --- 00:21:35.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.385 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.385 11:23:08 
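The `ipts` call above expands to an `iptables` invocation that tags the inserted rule with an `SPDK_NVMF:` comment. A minimal sketch of such a wrapper, reconstructed as an assumption from the expanded command in the log (not copied from the actual `nvmf/common.sh` source); `DRYRUN=1` prints the command instead of executing it, since inserting firewall rules requires root:

```shell
# Hypothetical ipts-style wrapper: tag each inserted rule with an
# 'SPDK_NVMF:' comment so teardown can later locate and delete exactly
# the rules this test run added. DRYRUN=1 makes the sketch runnable
# without root by printing the command instead of running it.
ipts() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    else
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    fi
}

DRYRUN=1 ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The comment tag is what allows cleanup to run `iptables -S`, grep for `SPDK_NVMF:`, and delete only test-owned rules without disturbing the host firewall.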
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1783461 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1783461 00:21:35.385 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1783461 ']' 00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.386 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:35.386 [2024-12-06 11:23:08.191423] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:21:35.386 [2024-12-06 11:23:08.191470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.386 [2024-12-06 11:23:08.267539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.386 [2024-12-06 11:23:08.308207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.386 [2024-12-06 11:23:08.308239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.386 [2024-12-06 11:23:08.308246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.386 [2024-12-06 11:23:08.308251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.386 [2024-12-06 11:23:08.308256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.386 [2024-12-06 11:23:08.309678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.386 [2024-12-06 11:23:08.309794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.386 [2024-12-06 11:23:08.309906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.386 [2024-12-06 11:23:08.309907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.319 [2024-12-06 11:23:09.047932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.319 11:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.319 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.320 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.320 Malloc1 00:21:36.320 [2024-12-06 11:23:09.154532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.320 Malloc2 00:21:36.320 Malloc3 00:21:36.577 Malloc4 00:21:36.577 Malloc5 00:21:36.577 Malloc6 00:21:36.577 Malloc7 00:21:36.577 Malloc8 00:21:36.577 Malloc9 
00:21:36.834 Malloc10 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1783752 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:36.834 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:36.834 [2024-12-06 11:23:09.657620] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
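The `spdk_nvme_perf` invocation above addresses the target with an SPDK transport-ID string (`trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420`). A hypothetical helper (not part of the SPDK tree) sketching how such a space-separated `key:value` string breaks down into its fields:

```shell
# Hypothetical helper: split an SPDK transport-ID string, like the -r
# argument to spdk_nvme_perf above, into one key=value line per field.
parse_trid() {
    for token in $1; do
        # ${token%%:*} is the part before the first ':', ${token#*:} after it
        echo "${token%%:*}=${token#*:}"
    done
}

parse_trid 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'
```

Here `trtype` selects the transport (tcp), `traddr`/`trsvcid` give the target IP and port (4420 is the IANA-assigned NVMe/TCP port), matching the listener set up on 10.0.0.2 earlier in the log.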
00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1783461 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1783461 ']' 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1783461 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1783461 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1783461' 00:21:42.110 killing process with pid 1783461 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1783461 00:21:42.110 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1783461 00:21:42.110 [2024-12-06 11:23:14.651707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 
11:23:14.651771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.651808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 [2024-12-06 11:23:14.658368] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4700 is same with the state(6) to be set 00:21:42.110 Write completed with error (sct=0, sc=8) 00:21:42.110 Write completed with error (sct=0, sc=8) 00:21:42.110 Write completed with error (sct=0, sc=8) 00:21:42.110 Write completed with error (sct=0, sc=8) 00:21:42.110 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 
00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7db0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write
completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf82a0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf82a0 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf82a0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662544] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf82a0 is same with the state(6) to be set 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf82a0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662867] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.662911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with tstarting I/O 
failed: -6 00:21:42.111 he state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 [2024-12-06 11:23:14.662941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.662947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8770 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 [2024-12-06 11:23:14.663026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.111 starting I/O failed: -6 00:21:42.111 [2024-12-06 11:23:14.663246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.111 Write completed with error (sct=0, sc=8) 00:21:42.112 [2024-12-06 11:23:14.663269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 [2024-12-06 11:23:14.663276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 [2024-12-06 11:23:14.663282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 starting I/O failed: -6 00:21:42.112 [2024-12-06 11:23:14.663289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 [2024-12-06 11:23:14.663298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 [2024-12-06 11:23:14.663304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 starting I/O failed: -6 00:21:42.112 [2024-12-06 11:23:14.663310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf78e0 is same with the state(6) to be set 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed 
with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 
Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 [2024-12-06 11:23:14.663963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write 
completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 
Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 
00:21:42.112 [2024-12-06 11:23:14.665687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.112 NVMe io qpair process completion error 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.112 starting I/O failed: -6 00:21:42.112 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write 
completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.666477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 [2024-12-06 11:23:14.666721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.666739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.666746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.666753] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.666760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.666767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with tstarting I/O failed: -6 00:21:42.113 he state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.666774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.666780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.666786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with tstarting I/O failed: -6 00:21:42.113 he state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.666793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9110 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed 
with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65150 is same with tWrite completed with error (sct=0, sc=8) 00:21:42.113 he state(6) to be set 00:21:42.113 starting I/O failed: -6 00:21:42.113 [2024-12-06 11:23:14.667090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65150 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65150 is same with the state(6) to be set 00:21:42.113 starting I/O failed: -6 00:21:42.113 [2024-12-06 11:23:14.667104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65150 is same with the state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.667110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65150 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 
00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 [2024-12-06 11:23:14.667591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 Write 
completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 starting I/O failed: -6 00:21:42.113 [2024-12-06 11:23:14.667624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.667631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 [2024-12-06 11:23:14.667637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with tstarting I/O failed: -6 00:21:42.113 he state(6) to be set 00:21:42.113 [2024-12-06 11:23:14.667644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8c40 is same with the state(6) to be set 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 Write completed with error (sct=0, sc=8) 00:21:42.113 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error 
(sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 00:21:42.114 [2024-12-06 11:23:14.668082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 [2024-12-06 11:23:14.668088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 00:21:42.114 [2024-12-06 11:23:14.668094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 
00:21:42.114 [2024-12-06 11:23:14.668108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65e90 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 [2024-12-06 11:23:14.668239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 
starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 [2024-12-06 11:23:14.668758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 [2024-12-06 11:23:14.668772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with 
the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 [2024-12-06 11:23:14.668778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.668785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 [2024-12-06 11:23:14.668791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66830 is same with the state(6) to be set 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed 
with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 [2024-12-06 11:23:14.669943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.114 NVMe io qpair process completion error 00:21:42.114 Write completed with error 
(sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 starting I/O failed: -6 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.114 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 
Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 [2024-12-06 11:23:14.670848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 
00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 [2024-12-06 11:23:14.671661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 
starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 
Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 Write completed with error (sct=0, sc=8) 00:21:42.115 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, 
sc=8) 00:21:42.116 [2024-12-06 11:23:14.672615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 
00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, 
sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 [2024-12-06 11:23:14.674206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.116 NVMe io qpair process completion error 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 
00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed 
with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 [2024-12-06 11:23:14.675099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.116 starting I/O failed: -6 00:21:42.116 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 
starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 [2024-12-06 11:23:14.675924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.117 Write completed with 
error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 starting I/O failed: -6 00:21:42.117 Write completed with error (sct=0, sc=8) 00:21:42.117 
00:21:42.117 Write completed with error (sct=0, sc=8)
00:21:42.117 starting I/O failed: -6
00:21:42.117 [2024-12-06 11:23:14.676824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:42.118 [2024-12-06 11:23:14.678267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:42.118 NVMe io qpair process completion error
00:21:42.118 [2024-12-06 11:23:14.679260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:42.118 [2024-12-06 11:23:14.680085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:42.119 [2024-12-06 11:23:14.681190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:42.119 [2024-12-06 11:23:14.685881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:42.119 NVMe io qpair process completion error
00:21:42.119 [2024-12-06 11:23:14.686965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:42.120 [2024-12-06 11:23:14.687802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:42.120 [2024-12-06 11:23:14.688730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:42.121 [2024-12-06 11:23:14.693267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:42.121 NVMe io qpair process completion error
00:21:42.121 [2024-12-06 11:23:14.694342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:42.121 starting I/O failed: -6
00:21:42.121 Write completed with error
(sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 
00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 [2024-12-06 11:23:14.695203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 starting I/O failed: -6 00:21:42.121 Write completed with error (sct=0, sc=8) 00:21:42.121 
starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 
Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 [2024-12-06 11:23:14.696108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, 
sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error 
(sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with 
error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 [2024-12-06 11:23:14.697811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.122 NVMe io qpair process completion error 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 
00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.122 starting I/O failed: -6 00:21:42.122 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 [2024-12-06 11:23:14.698861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error 
(sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 
00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 [2024-12-06 11:23:14.699664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 
Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, 
sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 [2024-12-06 11:23:14.700678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error 
(sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.123 starting I/O failed: -6 00:21:42.123 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with error (sct=0, sc=8) 00:21:42.124 starting I/O failed: -6 00:21:42.124 Write completed with 
00:21:42.124 Write completed with error (sct=0, sc=8)
00:21:42.124 starting I/O failed: -6
00:21:42.124 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding write ...]
00:21:42.124 [2024-12-06 11:23:14.702174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:42.124 NVMe io qpair process completion error
00:21:42.124 [... repeated write-error lines elided ...]
00:21:42.124 [2024-12-06 11:23:14.703140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:42.124 [... repeated write-error lines elided ...]
00:21:42.124 [2024-12-06 11:23:14.703979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:42.125 [... repeated write-error lines elided ...]
00:21:42.125 [2024-12-06 11:23:14.704954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:42.126 [... repeated write-error lines elided ...]
00:21:42.126 [2024-12-06 11:23:14.711347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:42.126 NVMe io qpair process completion error
00:21:42.126 [... repeated write-error lines elided ...]
00:21:42.126 [2024-12-06 11:23:14.712427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:42.126 [... repeated write-error lines elided ...]
00:21:42.126 [2024-12-06 11:23:14.713286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:42.127 [... repeated write-error lines elided ...]
00:21:42.127 [2024-12-06 11:23:14.714226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:42.127 [... repeated write-error lines elided ...]
00:21:42.127 [2024-12-06 11:23:14.716428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:42.127 NVMe io qpair process completion error
00:21:42.127 Initializing NVMe Controllers
00:21:42.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:42.127 Controller IO queue size 128, less than required.
00:21:42.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:42.127 [... same "Attached to NVMe over Fabrics controller" / queue-size warning lines repeated for cnode2, cnode4, cnode9, cnode3, cnode6, cnode7, cnode10, cnode1, cnode5 ...]
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:42.127 Initialization complete. Launching workers.
00:21:42.127 ========================================================
00:21:42.127 Latency(us)
00:21:42.127 Device Information : IOPS MiB/s Average min max
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2403.23 103.26 53264.62 774.39 101703.41
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2419.67 103.97 52913.51 765.82 100708.55
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2407.29 103.44 53233.78 758.71 116285.67
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2417.32 103.87 53056.51 800.49 115170.17
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2412.84 103.68 53167.78 755.33 115777.45
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2374.40 102.03 54047.82 450.76 92661.74
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2409.00 103.51 53328.05 815.99 113619.87
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2415.40 103.79 52575.17 682.14 87325.32
00:21:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2418.18 103.91 52515.24 754.33 87158.71
00:21:42.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2385.51 102.50 53244.73 602.48 87117.68
00:21:42.128 ========================================================
00:21:42.128 Total : 24062.84 1033.95 53132.76 450.76 116285.67
00:21:42.128
00:21:42.128 [2024-12-06 11:23:14.719381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf46b0 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3060 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719449] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf36c0 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf49e0 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3390 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf4050 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf4380 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5540 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5360 is same with the state(6) to be set
00:21:42.128 [2024-12-06 11:23:14.719630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf39f0 is same with the state(6) to be set
00:21:42.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:42.128 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1783752
00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1783752
00:21:43.506 11:23:16
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1783752 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:43.506 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:43.507 11:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.507 rmmod nvme_tcp 00:21:43.507 rmmod nvme_fabrics 00:21:43.507 rmmod nvme_keyring 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1783461 ']' 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1783461 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1783461 ']' 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1783461 00:21:43.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1783461) - No such process 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1783461 is not 
found' 00:21:43.507 Process with pid 1783461 is not found 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.507 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.427 00:21:45.427 real 0m10.404s 00:21:45.427 user 0m27.518s 00:21:45.427 sys 0m5.146s 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:45.427 ************************************ 00:21:45.427 END TEST nvmf_shutdown_tc4 00:21:45.427 ************************************ 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:45.427 00:21:45.427 real 0m41.545s 00:21:45.427 user 1m42.950s 00:21:45.427 sys 0m13.864s 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.427 ************************************ 00:21:45.427 END TEST nvmf_shutdown 00:21:45.427 ************************************ 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.427 ************************************ 00:21:45.427 START TEST nvmf_nsid 00:21:45.427 ************************************ 00:21:45.427 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:45.687 * Looking for test storage... 
00:21:45.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.687 
11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.687 --rc genhtml_branch_coverage=1 00:21:45.687 --rc genhtml_function_coverage=1 00:21:45.687 --rc genhtml_legend=1 00:21:45.687 --rc geninfo_all_blocks=1 00:21:45.687 --rc 
geninfo_unexecuted_blocks=1 00:21:45.687 00:21:45.687 ' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.687 --rc genhtml_branch_coverage=1 00:21:45.687 --rc genhtml_function_coverage=1 00:21:45.687 --rc genhtml_legend=1 00:21:45.687 --rc geninfo_all_blocks=1 00:21:45.687 --rc geninfo_unexecuted_blocks=1 00:21:45.687 00:21:45.687 ' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.687 --rc genhtml_branch_coverage=1 00:21:45.687 --rc genhtml_function_coverage=1 00:21:45.687 --rc genhtml_legend=1 00:21:45.687 --rc geninfo_all_blocks=1 00:21:45.687 --rc geninfo_unexecuted_blocks=1 00:21:45.687 00:21:45.687 ' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.687 --rc genhtml_branch_coverage=1 00:21:45.687 --rc genhtml_function_coverage=1 00:21:45.687 --rc genhtml_legend=1 00:21:45.687 --rc geninfo_all_blocks=1 00:21:45.687 --rc geninfo_unexecuted_blocks=1 00:21:45.687 00:21:45.687 ' 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.687 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.688 11:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.688 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:52.260 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.260 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.261 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.261 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.261 11:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.261 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:52.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:21:52.261 00:21:52.261 --- 10.0.0.2 ping statistics --- 00:21:52.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.261 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:21:52.261 00:21:52.261 --- 10.0.0.1 ping statistics --- 00:21:52.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.261 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.261 11:23:24 
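(Editor's note, for readers following the trace: the `nvmf_tcp_init` phase above places one port of the NIC into a private network namespace so a single host can act as both NVMe-oF target and initiator. A minimal sketch of the same pattern, using the interface names `cvl_0_0`/`cvl_0_1`, addresses, and namespace name observed in this run — adapt to your hardware; requires root:)

```shell
#!/usr/bin/env bash
# Single-host target/initiator topology, as set up in the trace above.
# cvl_0_0 becomes the target side (inside the namespace), cvl_0_1 the initiator.
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip netns add "$NS"                        # private namespace for the target
ip link set cvl_0_0 netns "$NS"           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Verify connectivity in both directions before starting the target,
# exactly as the trace does with its two pings.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target process is later launched with `ip netns exec "$NS" …`, its listener on 10.0.0.2:4420 is reachable only over the physical link between the two ports, giving real NIC coverage on one machine.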
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1788528 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1788528 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1788528 ']' 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.261 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.261 [2024-12-06 11:23:24.582774] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:21:52.261 [2024-12-06 11:23:24.582821] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.261 [2024-12-06 11:23:24.661436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.261 [2024-12-06 11:23:24.698457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.261 [2024-12-06 11:23:24.698494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.262 [2024-12-06 11:23:24.698500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.262 [2024-12-06 11:23:24.698505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.262 [2024-12-06 11:23:24.698510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.262 [2024-12-06 11:23:24.699045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1788805 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.519 
11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6f8a0ca7-abba-4d3e-b8b2-442621fdbc4f 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:52.519 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=34217739-5b43-43c4-80cf-8b82c815a7e0 00:21:52.777 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:52.777 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8482512c-ca74-4a5d-9580-4fa4c86006c5 00:21:52.777 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.778 null0 00:21:52.778 null1 00:21:52.778 null2 00:21:52.778 [2024-12-06 11:23:25.488246] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:21:52.778 [2024-12-06 11:23:25.488289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788805 ] 00:21:52.778 [2024-12-06 11:23:25.492100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.778 [2024-12-06 11:23:25.516301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1788805 /var/tmp/tgt2.sock 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1788805 ']' 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:52.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.778 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:52.778 [2024-12-06 11:23:25.561482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.778 [2024-12-06 11:23:25.603885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.035 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.035 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:53.035 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:53.293 [2024-12-06 11:23:26.099047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.293 [2024-12-06 11:23:26.115162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:53.293 nvme0n1 nvme0n2 00:21:53.293 nvme1n1 00:21:53.293 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:53.293 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:53.293 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:54.668 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6f8a0ca7-abba-4d3e-b8b2-442621fdbc4f 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:55.605 11:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6f8a0ca7abba4d3eb8b2442621fdbc4f 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6F8A0CA7ABBA4D3EB8B2442621FDBC4F 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6F8A0CA7ABBA4D3EB8B2442621FDBC4F == \6\F\8\A\0\C\A\7\A\B\B\A\4\D\3\E\B\8\B\2\4\4\2\6\2\1\F\D\B\C\4\F ]] 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 34217739-5b43-43c4-80cf-8b82c815a7e0 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:55.605 
11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:55.605 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=342177395b4343c480cf8b82c815a7e0 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 342177395B4343C480CF8B82C815A7E0 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 342177395B4343C480CF8B82C815A7E0 == \3\4\2\1\7\7\3\9\5\B\4\3\4\3\C\4\8\0\C\F\8\B\8\2\C\8\1\5\A\7\E\0 ]] 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8482512c-ca74-4a5d-9580-4fa4c86006c5 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8482512cca744a5d95804fa4c86006c5 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8482512CCA744A5D95804FA4C86006C5 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8482512CCA744A5D95804FA4C86006C5 == \8\4\8\2\5\1\2\C\C\A\7\4\4\A\5\D\9\5\8\0\4\F\A\4\C\8\6\0\0\6\C\5 ]] 00:21:55.864 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1788805 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1788805 ']' 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1788805 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1788805 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1788805' 00:21:56.123 killing process with pid 1788805 00:21:56.123 11:23:28 
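(Editor's note: the three NGUID comparisons above rely on the fact that, for namespaces created from a UUID, the NGUID reported by `nvme id-ns` is simply the UUID with hyphens stripped and hex digits uppercased — visible in the trace as `tr -d -` plus an uppercase `echo`. A sketch of that conversion; the function name matches the `uuid2nguid` helper invoked in the trace, but the body here is reconstructed from the observed output, not taken from the test scripts:)

```shell
# Convert a UUID to the NGUID form reported by `nvme id-ns`:
# drop the hyphens, uppercase the hex digits (bash 4+ case modification).
uuid2nguid() {
    local uuid=$1
    uuid=${uuid//-/}            # strip hyphens
    printf '%s\n' "${uuid^^}"   # uppercase
}

uuid2nguid 6f8a0ca7-abba-4d3e-b8b2-442621fdbc4f
# → 6F8A0CA7ABBA4D3EB8B2442621FDBC4F
```

This is why the test can assign UUIDs with `uuidgen` at namespace creation and later verify, via `[[ $expected == $reported ]]`, that each connected block device (`nvme0n1`..`nvme0n3`) carries the intended identity.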
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1788805 00:21:56.123 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1788805 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.381 rmmod nvme_tcp 00:21:56.381 rmmod nvme_fabrics 00:21:56.381 rmmod nvme_keyring 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1788528 ']' 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1788528 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1788528 ']' 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1788528 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.381 11:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1788528 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1788528' 00:21:56.381 killing process with pid 1788528 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1788528 00:21:56.381 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1788528 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.639 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.639 11:23:29 
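(Editor's note: the `ipts`/`iptr` pair seen in this trace — `ipts` when opening port 4420, `iptr` during `nvmf_tcp_fini` — tags every inserted firewall rule with an `SPDK_NVMF` comment so teardown can remove exactly those rules by filtering a ruleset dump. A sketch of that pattern, with the rule taken from this run; requires root:)

```shell
# Setup: insert the rule tagged with a recognizable comment,
# as common.sh@790 does above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Teardown: dump the ruleset, drop every SPDK_NVMF-tagged line,
# and restore the remainder, leaving unrelated rules untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore
```

Tagging via `-m comment` avoids having to remember and replay each inserted rule with `iptables -D`; the cleanup is idempotent even if the test aborted partway through setup.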
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.174 00:21:59.174 real 0m13.213s 00:21:59.174 user 0m10.714s 00:21:59.174 sys 0m5.556s 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.174 ************************************ 00:21:59.174 END TEST nvmf_nsid 00:21:59.174 ************************************ 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:59.174 00:21:59.174 real 12m8.596s 00:21:59.174 user 26m10.753s 00:21:59.174 sys 3m45.175s 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.174 11:23:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.174 ************************************ 00:21:59.174 END TEST nvmf_target_extra 00:21:59.174 ************************************ 00:21:59.174 11:23:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:59.174 11:23:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.174 11:23:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.174 11:23:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.174 ************************************ 00:21:59.174 START TEST nvmf_host 00:21:59.174 ************************************ 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:59.174 * Looking for test storage... 
00:21:59.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.174 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.174 --rc genhtml_branch_coverage=1 00:21:59.175 --rc genhtml_function_coverage=1 00:21:59.175 --rc genhtml_legend=1 00:21:59.175 --rc geninfo_all_blocks=1 00:21:59.175 --rc geninfo_unexecuted_blocks=1 00:21:59.175 00:21:59.175 ' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.175 --rc genhtml_branch_coverage=1 00:21:59.175 --rc genhtml_function_coverage=1 00:21:59.175 --rc genhtml_legend=1 00:21:59.175 --rc 
geninfo_all_blocks=1 00:21:59.175 --rc geninfo_unexecuted_blocks=1 00:21:59.175 00:21:59.175 ' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.175 --rc genhtml_branch_coverage=1 00:21:59.175 --rc genhtml_function_coverage=1 00:21:59.175 --rc genhtml_legend=1 00:21:59.175 --rc geninfo_all_blocks=1 00:21:59.175 --rc geninfo_unexecuted_blocks=1 00:21:59.175 00:21:59.175 ' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.175 --rc genhtml_branch_coverage=1 00:21:59.175 --rc genhtml_function_coverage=1 00:21:59.175 --rc genhtml_legend=1 00:21:59.175 --rc geninfo_all_blocks=1 00:21:59.175 --rc geninfo_unexecuted_blocks=1 00:21:59.175 00:21:59.175 ' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.175 ************************************ 00:21:59.175 START TEST nvmf_multicontroller 00:21:59.175 ************************************ 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.175 * Looking for test storage... 
00:21:59.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.175 11:23:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.175 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.176 --rc genhtml_branch_coverage=1 00:21:59.176 --rc genhtml_function_coverage=1 
00:21:59.176 --rc genhtml_legend=1 00:21:59.176 --rc geninfo_all_blocks=1 00:21:59.176 --rc geninfo_unexecuted_blocks=1 00:21:59.176 00:21:59.176 ' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.176 --rc genhtml_branch_coverage=1 00:21:59.176 --rc genhtml_function_coverage=1 00:21:59.176 --rc genhtml_legend=1 00:21:59.176 --rc geninfo_all_blocks=1 00:21:59.176 --rc geninfo_unexecuted_blocks=1 00:21:59.176 00:21:59.176 ' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.176 --rc genhtml_branch_coverage=1 00:21:59.176 --rc genhtml_function_coverage=1 00:21:59.176 --rc genhtml_legend=1 00:21:59.176 --rc geninfo_all_blocks=1 00:21:59.176 --rc geninfo_unexecuted_blocks=1 00:21:59.176 00:21:59.176 ' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.176 --rc genhtml_branch_coverage=1 00:21:59.176 --rc genhtml_function_coverage=1 00:21:59.176 --rc genhtml_legend=1 00:21:59.176 --rc geninfo_all_blocks=1 00:21:59.176 --rc geninfo_unexecuted_blocks=1 00:21:59.176 00:21:59.176 ' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.176 11:23:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.176 11:23:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:05.741 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.741 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:05.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.742 11:23:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:05.742 Found net devices under 0000:af:00.0: cvl_0_0 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:05.742 Found net devices under 0000:af:00.1: cvl_0_1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.742 11:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:22:05.742 00:22:05.742 --- 10.0.0.2 ping statistics --- 00:22:05.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.742 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:22:05.742 00:22:05.742 --- 10.0.0.1 ping statistics --- 00:22:05.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.742 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1793197 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1793197 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1793197 ']' 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.742 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.742 [2024-12-06 11:23:38.175269] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:05.742 [2024-12-06 11:23:38.175312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.742 [2024-12-06 11:23:38.250029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:05.742 [2024-12-06 11:23:38.289375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.742 [2024-12-06 11:23:38.289410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:05.742 [2024-12-06 11:23:38.289416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.742 [2024-12-06 11:23:38.289422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.742 [2024-12-06 11:23:38.289427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.742 [2024-12-06 11:23:38.290812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.742 [2024-12-06 11:23:38.290926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.742 [2024-12-06 11:23:38.290927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.312 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.312 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:06.312 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.312 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.312 11:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 [2024-12-06 11:23:39.040713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 Malloc0 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 [2024-12-06 
11:23:39.109300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 [2024-12-06 11:23:39.117219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 Malloc1 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:06.312 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1793477 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1793477 /var/tmp/bdevperf.sock 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1793477 ']' 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.313 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 NVMe0n1 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.572 1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:06.832 11:23:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.832 request: 00:22:06.832 { 00:22:06.832 "name": "NVMe0", 00:22:06.832 "trtype": "tcp", 00:22:06.832 "traddr": "10.0.0.2", 00:22:06.832 "adrfam": "ipv4", 00:22:06.832 "trsvcid": "4420", 00:22:06.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.832 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:06.832 "hostaddr": "10.0.0.1", 00:22:06.832 "prchk_reftag": false, 00:22:06.832 "prchk_guard": false, 00:22:06.832 "hdgst": false, 00:22:06.832 "ddgst": false, 00:22:06.832 "allow_unrecognized_csi": false, 00:22:06.832 "method": "bdev_nvme_attach_controller", 00:22:06.832 "req_id": 1 00:22:06.832 } 00:22:06.832 Got JSON-RPC error response 00:22:06.832 response: 00:22:06.832 { 00:22:06.832 "code": -114, 00:22:06.832 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:06.832 } 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:06.832 11:23:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.832 request: 00:22:06.832 { 00:22:06.832 "name": "NVMe0", 00:22:06.832 "trtype": "tcp", 00:22:06.832 "traddr": "10.0.0.2", 00:22:06.832 "adrfam": "ipv4", 00:22:06.832 "trsvcid": "4420", 00:22:06.832 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:06.832 "hostaddr": "10.0.0.1", 00:22:06.832 "prchk_reftag": false, 00:22:06.832 "prchk_guard": false, 00:22:06.832 "hdgst": false, 00:22:06.832 "ddgst": false, 00:22:06.832 "allow_unrecognized_csi": false, 00:22:06.832 "method": "bdev_nvme_attach_controller", 00:22:06.832 "req_id": 1 00:22:06.832 } 00:22:06.832 Got JSON-RPC error response 00:22:06.832 response: 00:22:06.832 { 00:22:06.832 "code": -114, 00:22:06.832 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:06.832 } 00:22:06.832 11:23:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.832 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.832 request: 00:22:06.832 { 00:22:06.832 "name": "NVMe0", 00:22:06.832 "trtype": "tcp", 00:22:06.832 "traddr": "10.0.0.2", 00:22:06.832 "adrfam": "ipv4", 00:22:06.832 "trsvcid": "4420", 00:22:06.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.832 "hostaddr": "10.0.0.1", 00:22:06.832 "prchk_reftag": false, 00:22:06.833 "prchk_guard": false, 00:22:06.833 "hdgst": false, 00:22:06.833 "ddgst": false, 00:22:06.833 "multipath": "disable", 00:22:06.833 "allow_unrecognized_csi": false, 00:22:06.833 "method": "bdev_nvme_attach_controller", 00:22:06.833 "req_id": 1 00:22:06.833 } 00:22:06.833 Got JSON-RPC error response 00:22:06.833 response: 00:22:06.833 { 00:22:06.833 "code": -114, 00:22:06.833 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:06.833 } 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.833 request: 00:22:06.833 { 00:22:06.833 "name": "NVMe0", 00:22:06.833 "trtype": "tcp", 00:22:06.833 "traddr": "10.0.0.2", 00:22:06.833 "adrfam": "ipv4", 00:22:06.833 "trsvcid": "4420", 00:22:06.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.833 "hostaddr": "10.0.0.1", 00:22:06.833 "prchk_reftag": false, 00:22:06.833 "prchk_guard": false, 00:22:06.833 "hdgst": false, 00:22:06.833 "ddgst": false, 00:22:06.833 "multipath": "failover", 00:22:06.833 "allow_unrecognized_csi": false, 00:22:06.833 "method": "bdev_nvme_attach_controller", 00:22:06.833 "req_id": 1 00:22:06.833 } 00:22:06.833 Got JSON-RPC error response 00:22:06.833 response: 00:22:06.833 { 00:22:06.833 "code": -114, 00:22:06.833 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:06.833 } 00:22:06.833 11:23:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.833 NVMe0n1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.833 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.092 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:07.092 11:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.028 { 00:22:08.028 "results": [ 00:22:08.028 { 00:22:08.028 "job": "NVMe0n1", 00:22:08.028 "core_mask": "0x1", 00:22:08.028 "workload": "write", 00:22:08.028 "status": "finished", 00:22:08.028 "queue_depth": 128, 00:22:08.028 "io_size": 4096, 00:22:08.028 "runtime": 1.005156, 00:22:08.028 "iops": 27290.291258272347, 00:22:08.028 "mibps": 106.60270022762636, 00:22:08.028 "io_failed": 0, 00:22:08.028 "io_timeout": 0, 00:22:08.028 "avg_latency_us": 4680.220369124514, 00:22:08.028 "min_latency_us": 1906.5018181818182, 00:22:08.028 "max_latency_us": 9711.243636363637 00:22:08.028 } 00:22:08.028 ], 00:22:08.028 "core_count": 1 00:22:08.028 } 00:22:08.028 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:08.028 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1793477 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1793477 ']' 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1793477 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.287 11:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793477 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793477' 00:22:08.287 killing process with pid 1793477 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1793477 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1793477 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:08.287 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:08.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:08.547 [2024-12-06 11:23:39.218342] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:22:08.547 [2024-12-06 11:23:39.218390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793477 ] 00:22:08.547 [2024-12-06 11:23:39.291626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.547 [2024-12-06 11:23:39.331291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.547 [2024-12-06 11:23:39.822042] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 962a3c0d-3a18-4022-a78b-785173d657d3 already exists 00:22:08.547 [2024-12-06 11:23:39.822077] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:962a3c0d-3a18-4022-a78b-785173d657d3 alias for bdev NVMe1n1 00:22:08.547 [2024-12-06 11:23:39.822085] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:08.547 Running I/O for 1 seconds... 00:22:08.547 27240.00 IOPS, 106.41 MiB/s 00:22:08.547 Latency(us) 00:22:08.547 [2024-12-06T10:23:41.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.547 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:08.547 NVMe0n1 : 1.01 27290.29 106.60 0.00 0.00 4680.22 1906.50 9711.24 00:22:08.547 [2024-12-06T10:23:41.485Z] =================================================================================================================== 00:22:08.547 [2024-12-06T10:23:41.485Z] Total : 27290.29 106.60 0.00 0.00 4680.22 1906.50 9711.24 00:22:08.547 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.547 00:22:08.547 Latency(us) 00:22:08.547 [2024-12-06T10:23:41.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.547 [2024-12-06T10:23:41.485Z] =================================================================================================================== 00:22:08.547 [2024-12-06T10:23:41.485Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:22:08.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.547 rmmod nvme_tcp 00:22:08.547 rmmod nvme_fabrics 00:22:08.547 rmmod nvme_keyring 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1793197 ']' 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1793197 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1793197 ']' 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1793197 
00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.547 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793197 00:22:08.548 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.548 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:08.548 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793197' 00:22:08.548 killing process with pid 1793197 00:22:08.548 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1793197 00:22:08.548 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1793197 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.807 11:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.713 00:22:10.713 real 0m11.720s 00:22:10.713 user 0m13.708s 00:22:10.713 sys 0m5.264s 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 ************************************ 00:22:10.713 END TEST nvmf_multicontroller 00:22:10.713 ************************************ 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.713 11:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.973 ************************************ 00:22:10.973 START TEST nvmf_aer 00:22:10.973 ************************************ 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:10.973 * Looking for test storage... 
00:22:10.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:10.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.973 --rc genhtml_branch_coverage=1 00:22:10.973 --rc genhtml_function_coverage=1 00:22:10.973 --rc genhtml_legend=1 00:22:10.973 --rc geninfo_all_blocks=1 00:22:10.973 --rc geninfo_unexecuted_blocks=1 00:22:10.973 00:22:10.973 ' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:10.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.973 --rc 
genhtml_branch_coverage=1 00:22:10.973 --rc genhtml_function_coverage=1 00:22:10.973 --rc genhtml_legend=1 00:22:10.973 --rc geninfo_all_blocks=1 00:22:10.973 --rc geninfo_unexecuted_blocks=1 00:22:10.973 00:22:10.973 ' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:10.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.973 --rc genhtml_branch_coverage=1 00:22:10.973 --rc genhtml_function_coverage=1 00:22:10.973 --rc genhtml_legend=1 00:22:10.973 --rc geninfo_all_blocks=1 00:22:10.973 --rc geninfo_unexecuted_blocks=1 00:22:10.973 00:22:10.973 ' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:10.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.973 --rc genhtml_branch_coverage=1 00:22:10.973 --rc genhtml_function_coverage=1 00:22:10.973 --rc genhtml_legend=1 00:22:10.973 --rc geninfo_all_blocks=1 00:22:10.973 --rc geninfo_unexecuted_blocks=1 00:22:10.973 00:22:10.973 ' 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.973 11:23:43 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.973 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.974 11:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.539 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:17.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:17.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.540 11:23:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:17.540 Found net devices under 0000:af:00.0: cvl_0_0 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:17.540 Found net devices under 0000:af:00.1: cvl_0_1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:17.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:22:17.540 00:22:17.540 --- 10.0.0.2 ping statistics --- 00:22:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.540 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:17.540 00:22:17.540 --- 10.0.0.1 ping statistics --- 00:22:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.540 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1797484 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1797484 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1797484 ']' 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.540 11:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:17.540 [2024-12-06 11:23:49.905597] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:17.540 [2024-12-06 11:23:49.905639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.540 [2024-12-06 11:23:49.979454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.540 [2024-12-06 11:23:50.031563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:17.540 [2024-12-06 11:23:50.031597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.540 [2024-12-06 11:23:50.031604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.540 [2024-12-06 11:23:50.031611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.540 [2024-12-06 11:23:50.031616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.540 [2024-12-06 11:23:50.032938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.540 [2024-12-06 11:23:50.033054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.540 [2024-12-06 11:23:50.033088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.540 [2024-12-06 11:23:50.033089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.799 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.800 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:17.800 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.800 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.800 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.058 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.058 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 [2024-12-06 11:23:50.752781] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 Malloc0 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 [2024-12-06 11:23:50.820124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.059 [ 00:22:18.059 { 00:22:18.059 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:18.059 "subtype": "Discovery", 00:22:18.059 "listen_addresses": [], 00:22:18.059 "allow_any_host": true, 00:22:18.059 "hosts": [] 00:22:18.059 }, 00:22:18.059 { 00:22:18.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.059 "subtype": "NVMe", 00:22:18.059 "listen_addresses": [ 00:22:18.059 { 00:22:18.059 "trtype": "TCP", 00:22:18.059 "adrfam": "IPv4", 00:22:18.059 "traddr": "10.0.0.2", 00:22:18.059 "trsvcid": "4420" 00:22:18.059 } 00:22:18.059 ], 00:22:18.059 "allow_any_host": true, 00:22:18.059 "hosts": [], 00:22:18.059 "serial_number": "SPDK00000000000001", 00:22:18.059 "model_number": "SPDK bdev Controller", 00:22:18.059 "max_namespaces": 2, 00:22:18.059 "min_cntlid": 1, 00:22:18.059 "max_cntlid": 65519, 00:22:18.059 "namespaces": [ 00:22:18.059 { 00:22:18.059 "nsid": 1, 00:22:18.059 "bdev_name": "Malloc0", 00:22:18.059 "name": "Malloc0", 00:22:18.059 "nguid": "BC5F539C3AF24F4D93A36ECEA58D24C9", 00:22:18.059 "uuid": "bc5f539c-3af2-4f4d-93a3-6ecea58d24c9" 00:22:18.059 } 00:22:18.059 ] 00:22:18.059 } 00:22:18.059 ] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1797621 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:18.059 11:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 Malloc1 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 Asynchronous Event Request test 00:22:18.318 Attaching to 10.0.0.2 00:22:18.318 Attached to 10.0.0.2 00:22:18.318 Registering asynchronous event callbacks... 00:22:18.318 Starting namespace attribute notice tests for all controllers... 00:22:18.318 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:18.318 aer_cb - Changed Namespace 00:22:18.318 Cleaning up... 
00:22:18.318 [ 00:22:18.318 { 00:22:18.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:18.318 "subtype": "Discovery", 00:22:18.318 "listen_addresses": [], 00:22:18.318 "allow_any_host": true, 00:22:18.318 "hosts": [] 00:22:18.318 }, 00:22:18.318 { 00:22:18.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.318 "subtype": "NVMe", 00:22:18.318 "listen_addresses": [ 00:22:18.318 { 00:22:18.318 "trtype": "TCP", 00:22:18.318 "adrfam": "IPv4", 00:22:18.318 "traddr": "10.0.0.2", 00:22:18.318 "trsvcid": "4420" 00:22:18.318 } 00:22:18.318 ], 00:22:18.318 "allow_any_host": true, 00:22:18.318 "hosts": [], 00:22:18.318 "serial_number": "SPDK00000000000001", 00:22:18.318 "model_number": "SPDK bdev Controller", 00:22:18.318 "max_namespaces": 2, 00:22:18.318 "min_cntlid": 1, 00:22:18.318 "max_cntlid": 65519, 00:22:18.318 "namespaces": [ 00:22:18.318 { 00:22:18.318 "nsid": 1, 00:22:18.318 "bdev_name": "Malloc0", 00:22:18.318 "name": "Malloc0", 00:22:18.318 "nguid": "BC5F539C3AF24F4D93A36ECEA58D24C9", 00:22:18.318 "uuid": "bc5f539c-3af2-4f4d-93a3-6ecea58d24c9" 00:22:18.318 }, 00:22:18.318 { 00:22:18.318 "nsid": 2, 00:22:18.318 "bdev_name": "Malloc1", 00:22:18.318 "name": "Malloc1", 00:22:18.318 "nguid": "FEA6737E51754024A1464F083473231D", 00:22:18.318 "uuid": "fea6737e-5175-4024-a146-4f083473231d" 00:22:18.318 } 00:22:18.318 ] 00:22:18.318 } 00:22:18.318 ] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1797621 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:18.318 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.319 rmmod nvme_tcp 00:22:18.319 rmmod nvme_fabrics 00:22:18.319 rmmod nvme_keyring 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1797484 ']' 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1797484 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1797484 ']' 00:22:18.319 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1797484 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1797484 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1797484' 00:22:18.577 killing process with pid 1797484 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1797484 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1797484 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.577 11:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.194 00:22:21.194 real 0m9.863s 00:22:21.194 user 0m7.697s 00:22:21.194 sys 0m4.840s 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.194 ************************************ 00:22:21.194 END TEST nvmf_aer 00:22:21.194 ************************************ 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.194 ************************************ 00:22:21.194 START TEST nvmf_async_init 00:22:21.194 ************************************ 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:21.194 * Looking for test storage... 
00:22:21.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.194 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.195 11:23:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.195 --rc genhtml_branch_coverage=1 00:22:21.195 --rc genhtml_function_coverage=1 00:22:21.195 --rc genhtml_legend=1 00:22:21.195 --rc geninfo_all_blocks=1 00:22:21.195 --rc geninfo_unexecuted_blocks=1 00:22:21.195 
00:22:21.195 ' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.195 --rc genhtml_branch_coverage=1 00:22:21.195 --rc genhtml_function_coverage=1 00:22:21.195 --rc genhtml_legend=1 00:22:21.195 --rc geninfo_all_blocks=1 00:22:21.195 --rc geninfo_unexecuted_blocks=1 00:22:21.195 00:22:21.195 ' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.195 --rc genhtml_branch_coverage=1 00:22:21.195 --rc genhtml_function_coverage=1 00:22:21.195 --rc genhtml_legend=1 00:22:21.195 --rc geninfo_all_blocks=1 00:22:21.195 --rc geninfo_unexecuted_blocks=1 00:22:21.195 00:22:21.195 ' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.195 --rc genhtml_branch_coverage=1 00:22:21.195 --rc genhtml_function_coverage=1 00:22:21.195 --rc genhtml_legend=1 00:22:21.195 --rc geninfo_all_blocks=1 00:22:21.195 --rc geninfo_unexecuted_blocks=1 00:22:21.195 00:22:21.195 ' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.195 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=15ba30dbe3ab48ab98b79b22f73a8835 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.196 11:23:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.815 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.816 11:23:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:27.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:27.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:27.816 Found net devices under 0000:af:00.0: cvl_0_0 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:27.816 Found net devices under 0000:af:00.1: cvl_0_1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:27.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:27.816 00:22:27.816 --- 10.0.0.2 ping statistics --- 00:22:27.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.816 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:22:27.816 00:22:27.816 --- 10.0.0.1 ping statistics --- 00:22:27.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.816 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.816 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1801309 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1801309 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1801309 ']' 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.817 11:23:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.817 [2024-12-06 11:23:59.916139] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:22:27.817 [2024-12-06 11:23:59.916177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.817 [2024-12-06 11:23:59.992632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.817 [2024-12-06 11:24:00.041510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.817 [2024-12-06 11:24:00.041545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.817 [2024-12-06 11:24:00.041553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.817 [2024-12-06 11:24:00.041560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.817 [2024-12-06 11:24:00.041565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.817 [2024-12-06 11:24:00.042083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.817 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.817 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:27.817 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.817 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.817 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 [2024-12-06 11:24:00.779337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 null0 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 15ba30dbe3ab48ab98b79b22f73a8835 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 [2024-12-06 11:24:00.827583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 11:24:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.335 nvme0n1 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.335 [ 00:22:28.335 { 00:22:28.335 "name": "nvme0n1", 00:22:28.335 "aliases": [ 00:22:28.335 "15ba30db-e3ab-48ab-98b7-9b22f73a8835" 00:22:28.335 ], 00:22:28.335 "product_name": "NVMe disk", 00:22:28.335 "block_size": 512, 00:22:28.335 "num_blocks": 2097152, 00:22:28.335 "uuid": "15ba30db-e3ab-48ab-98b7-9b22f73a8835", 00:22:28.335 "numa_id": 1, 00:22:28.335 "assigned_rate_limits": { 00:22:28.335 "rw_ios_per_sec": 0, 00:22:28.335 "rw_mbytes_per_sec": 0, 00:22:28.335 "r_mbytes_per_sec": 0, 00:22:28.335 "w_mbytes_per_sec": 0 00:22:28.335 }, 00:22:28.335 "claimed": false, 00:22:28.335 "zoned": false, 00:22:28.335 "supported_io_types": { 00:22:28.335 "read": true, 00:22:28.335 "write": true, 00:22:28.335 "unmap": false, 00:22:28.335 "flush": true, 00:22:28.335 "reset": true, 00:22:28.335 "nvme_admin": true, 00:22:28.335 "nvme_io": true, 00:22:28.335 "nvme_io_md": false, 00:22:28.335 "write_zeroes": true, 00:22:28.335 "zcopy": false, 00:22:28.335 "get_zone_info": false, 00:22:28.335 "zone_management": false, 00:22:28.335 "zone_append": false, 00:22:28.335 "compare": true, 00:22:28.335 "compare_and_write": true, 00:22:28.335 "abort": true, 00:22:28.335 "seek_hole": false, 00:22:28.335 "seek_data": false, 00:22:28.335 "copy": true, 00:22:28.335 
"nvme_iov_md": false 00:22:28.335 }, 00:22:28.335 "memory_domains": [ 00:22:28.335 { 00:22:28.335 "dma_device_id": "system", 00:22:28.335 "dma_device_type": 1 00:22:28.335 } 00:22:28.335 ], 00:22:28.335 "driver_specific": { 00:22:28.335 "nvme": [ 00:22:28.335 { 00:22:28.335 "trid": { 00:22:28.335 "trtype": "TCP", 00:22:28.335 "adrfam": "IPv4", 00:22:28.335 "traddr": "10.0.0.2", 00:22:28.335 "trsvcid": "4420", 00:22:28.335 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:28.335 }, 00:22:28.335 "ctrlr_data": { 00:22:28.335 "cntlid": 1, 00:22:28.335 "vendor_id": "0x8086", 00:22:28.335 "model_number": "SPDK bdev Controller", 00:22:28.335 "serial_number": "00000000000000000000", 00:22:28.335 "firmware_revision": "25.01", 00:22:28.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.335 "oacs": { 00:22:28.335 "security": 0, 00:22:28.335 "format": 0, 00:22:28.335 "firmware": 0, 00:22:28.335 "ns_manage": 0 00:22:28.335 }, 00:22:28.335 "multi_ctrlr": true, 00:22:28.335 "ana_reporting": false 00:22:28.335 }, 00:22:28.335 "vs": { 00:22:28.335 "nvme_version": "1.3" 00:22:28.335 }, 00:22:28.335 "ns_data": { 00:22:28.335 "id": 1, 00:22:28.335 "can_share": true 00:22:28.335 } 00:22:28.335 } 00:22:28.335 ], 00:22:28.335 "mp_policy": "active_passive" 00:22:28.335 } 00:22:28.335 } 00:22:28.335 ] 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.335 [2024-12-06 11:24:01.092110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:28.335 [2024-12-06 11:24:01.092161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1b89a80 (9): Bad file descriptor 00:22:28.335 [2024-12-06 11:24:01.265131] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.335 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 [ 00:22:28.595 { 00:22:28.595 "name": "nvme0n1", 00:22:28.595 "aliases": [ 00:22:28.595 "15ba30db-e3ab-48ab-98b7-9b22f73a8835" 00:22:28.595 ], 00:22:28.595 "product_name": "NVMe disk", 00:22:28.595 "block_size": 512, 00:22:28.595 "num_blocks": 2097152, 00:22:28.595 "uuid": "15ba30db-e3ab-48ab-98b7-9b22f73a8835", 00:22:28.595 "numa_id": 1, 00:22:28.595 "assigned_rate_limits": { 00:22:28.595 "rw_ios_per_sec": 0, 00:22:28.595 "rw_mbytes_per_sec": 0, 00:22:28.595 "r_mbytes_per_sec": 0, 00:22:28.595 "w_mbytes_per_sec": 0 00:22:28.595 }, 00:22:28.595 "claimed": false, 00:22:28.595 "zoned": false, 00:22:28.595 "supported_io_types": { 00:22:28.595 "read": true, 00:22:28.595 "write": true, 00:22:28.595 "unmap": false, 00:22:28.595 "flush": true, 00:22:28.595 "reset": true, 00:22:28.595 "nvme_admin": true, 00:22:28.595 "nvme_io": true, 00:22:28.595 "nvme_io_md": false, 00:22:28.595 "write_zeroes": true, 00:22:28.595 "zcopy": false, 00:22:28.595 "get_zone_info": false, 00:22:28.595 "zone_management": false, 00:22:28.595 "zone_append": false, 00:22:28.595 "compare": true, 00:22:28.595 "compare_and_write": true, 00:22:28.595 "abort": true, 00:22:28.595 "seek_hole": false, 00:22:28.595 "seek_data": false, 00:22:28.595 "copy": true, 00:22:28.595 "nvme_iov_md": false 00:22:28.595 }, 00:22:28.595 "memory_domains": [ 
00:22:28.595 { 00:22:28.595 "dma_device_id": "system", 00:22:28.595 "dma_device_type": 1 00:22:28.595 } 00:22:28.595 ], 00:22:28.595 "driver_specific": { 00:22:28.595 "nvme": [ 00:22:28.595 { 00:22:28.595 "trid": { 00:22:28.595 "trtype": "TCP", 00:22:28.595 "adrfam": "IPv4", 00:22:28.595 "traddr": "10.0.0.2", 00:22:28.595 "trsvcid": "4420", 00:22:28.595 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:28.595 }, 00:22:28.595 "ctrlr_data": { 00:22:28.595 "cntlid": 2, 00:22:28.595 "vendor_id": "0x8086", 00:22:28.595 "model_number": "SPDK bdev Controller", 00:22:28.595 "serial_number": "00000000000000000000", 00:22:28.595 "firmware_revision": "25.01", 00:22:28.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.595 "oacs": { 00:22:28.595 "security": 0, 00:22:28.595 "format": 0, 00:22:28.595 "firmware": 0, 00:22:28.595 "ns_manage": 0 00:22:28.595 }, 00:22:28.595 "multi_ctrlr": true, 00:22:28.595 "ana_reporting": false 00:22:28.595 }, 00:22:28.595 "vs": { 00:22:28.595 "nvme_version": "1.3" 00:22:28.595 }, 00:22:28.595 "ns_data": { 00:22:28.595 "id": 1, 00:22:28.595 "can_share": true 00:22:28.595 } 00:22:28.595 } 00:22:28.595 ], 00:22:28.595 "mp_policy": "active_passive" 00:22:28.595 } 00:22:28.595 } 00:22:28.595 ] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.b3kxhHdBes 
00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.b3kxhHdBes 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.b3kxhHdBes 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 [2024-12-06 11:24:01.340836] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.595 [2024-12-06 11:24:01.340928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 [2024-12-06 11:24:01.360901] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.595 nvme0n1 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.595 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.595 [ 00:22:28.595 { 00:22:28.595 "name": "nvme0n1", 00:22:28.595 "aliases": [ 00:22:28.595 "15ba30db-e3ab-48ab-98b7-9b22f73a8835" 00:22:28.595 ], 00:22:28.595 "product_name": "NVMe disk", 00:22:28.595 "block_size": 512, 00:22:28.595 "num_blocks": 2097152, 00:22:28.595 "uuid": "15ba30db-e3ab-48ab-98b7-9b22f73a8835", 00:22:28.595 "numa_id": 1, 00:22:28.595 "assigned_rate_limits": { 00:22:28.595 "rw_ios_per_sec": 0, 00:22:28.595 
"rw_mbytes_per_sec": 0, 00:22:28.595 "r_mbytes_per_sec": 0, 00:22:28.595 "w_mbytes_per_sec": 0 00:22:28.595 }, 00:22:28.595 "claimed": false, 00:22:28.595 "zoned": false, 00:22:28.595 "supported_io_types": { 00:22:28.595 "read": true, 00:22:28.595 "write": true, 00:22:28.595 "unmap": false, 00:22:28.595 "flush": true, 00:22:28.595 "reset": true, 00:22:28.595 "nvme_admin": true, 00:22:28.595 "nvme_io": true, 00:22:28.595 "nvme_io_md": false, 00:22:28.595 "write_zeroes": true, 00:22:28.595 "zcopy": false, 00:22:28.595 "get_zone_info": false, 00:22:28.595 "zone_management": false, 00:22:28.595 "zone_append": false, 00:22:28.595 "compare": true, 00:22:28.595 "compare_and_write": true, 00:22:28.595 "abort": true, 00:22:28.595 "seek_hole": false, 00:22:28.595 "seek_data": false, 00:22:28.595 "copy": true, 00:22:28.595 "nvme_iov_md": false 00:22:28.595 }, 00:22:28.595 "memory_domains": [ 00:22:28.595 { 00:22:28.595 "dma_device_id": "system", 00:22:28.595 "dma_device_type": 1 00:22:28.595 } 00:22:28.595 ], 00:22:28.595 "driver_specific": { 00:22:28.595 "nvme": [ 00:22:28.595 { 00:22:28.595 "trid": { 00:22:28.595 "trtype": "TCP", 00:22:28.595 "adrfam": "IPv4", 00:22:28.595 "traddr": "10.0.0.2", 00:22:28.595 "trsvcid": "4421", 00:22:28.595 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:28.595 }, 00:22:28.595 "ctrlr_data": { 00:22:28.595 "cntlid": 3, 00:22:28.595 "vendor_id": "0x8086", 00:22:28.595 "model_number": "SPDK bdev Controller", 00:22:28.595 "serial_number": "00000000000000000000", 00:22:28.595 "firmware_revision": "25.01", 00:22:28.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.595 "oacs": { 00:22:28.595 "security": 0, 00:22:28.595 "format": 0, 00:22:28.595 "firmware": 0, 00:22:28.595 "ns_manage": 0 00:22:28.595 }, 00:22:28.595 "multi_ctrlr": true, 00:22:28.595 "ana_reporting": false 00:22:28.595 }, 00:22:28.595 "vs": { 00:22:28.595 "nvme_version": "1.3" 00:22:28.596 }, 00:22:28.596 "ns_data": { 00:22:28.596 "id": 1, 00:22:28.596 "can_share": true 00:22:28.596 } 
00:22:28.596 } 00:22:28.596 ], 00:22:28.596 "mp_policy": "active_passive" 00:22:28.596 } 00:22:28.596 } 00:22:28.596 ] 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.b3kxhHdBes 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.596 rmmod nvme_tcp 00:22:28.596 rmmod nvme_fabrics 00:22:28.596 rmmod nvme_keyring 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:28.596 11:24:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1801309 ']' 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1801309 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1801309 ']' 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1801309 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:28.596 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.855 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801309 00:22:28.855 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.855 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.855 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801309' 00:22:28.855 killing process with pid 1801309 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1801309 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1801309 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.856 
11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.856 11:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.394 00:22:31.394 real 0m10.175s 00:22:31.394 user 0m3.818s 00:22:31.394 sys 0m4.944s 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.394 ************************************ 00:22:31.394 END TEST nvmf_async_init 00:22:31.394 ************************************ 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.394 ************************************ 00:22:31.394 START TEST dma 00:22:31.394 ************************************ 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:31.394 * Looking for test storage... 00:22:31.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:31.394 11:24:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.394 --rc genhtml_branch_coverage=1 00:22:31.394 --rc genhtml_function_coverage=1 00:22:31.394 --rc genhtml_legend=1 00:22:31.394 --rc geninfo_all_blocks=1 00:22:31.394 --rc geninfo_unexecuted_blocks=1 00:22:31.394 00:22:31.394 ' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.394 --rc genhtml_branch_coverage=1 00:22:31.394 --rc genhtml_function_coverage=1 
00:22:31.394 --rc genhtml_legend=1 00:22:31.394 --rc geninfo_all_blocks=1 00:22:31.394 --rc geninfo_unexecuted_blocks=1 00:22:31.394 00:22:31.394 ' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.394 --rc genhtml_branch_coverage=1 00:22:31.394 --rc genhtml_function_coverage=1 00:22:31.394 --rc genhtml_legend=1 00:22:31.394 --rc geninfo_all_blocks=1 00:22:31.394 --rc geninfo_unexecuted_blocks=1 00:22:31.394 00:22:31.394 ' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.394 --rc genhtml_branch_coverage=1 00:22:31.394 --rc genhtml_function_coverage=1 00:22:31.394 --rc genhtml_legend=1 00:22:31.394 --rc geninfo_all_blocks=1 00:22:31.394 --rc geninfo_unexecuted_blocks=1 00:22:31.394 00:22:31.394 ' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:31.394 
11:24:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.394 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:31.395 00:22:31.395 real 0m0.206s 00:22:31.395 user 0m0.122s 00:22:31.395 sys 0m0.098s 00:22:31.395 11:24:04 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:31.395 ************************************ 00:22:31.395 END TEST dma 00:22:31.395 ************************************ 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.395 ************************************ 00:22:31.395 START TEST nvmf_identify 00:22:31.395 ************************************ 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:31.395 * Looking for test storage... 
00:22:31.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:31.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.395 --rc genhtml_branch_coverage=1 00:22:31.395 --rc genhtml_function_coverage=1 00:22:31.395 --rc genhtml_legend=1 00:22:31.395 --rc geninfo_all_blocks=1 00:22:31.395 --rc geninfo_unexecuted_blocks=1 00:22:31.395 00:22:31.395 ' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:31.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.395 --rc genhtml_branch_coverage=1 00:22:31.395 --rc genhtml_function_coverage=1 00:22:31.395 --rc genhtml_legend=1 00:22:31.395 --rc geninfo_all_blocks=1 00:22:31.395 --rc geninfo_unexecuted_blocks=1 00:22:31.395 00:22:31.395 ' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:31.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.395 --rc genhtml_branch_coverage=1 00:22:31.395 --rc genhtml_function_coverage=1 00:22:31.395 --rc genhtml_legend=1 00:22:31.395 --rc geninfo_all_blocks=1 00:22:31.395 --rc geninfo_unexecuted_blocks=1 00:22:31.395 00:22:31.395 ' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:31.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.395 --rc genhtml_branch_coverage=1 00:22:31.395 --rc genhtml_function_coverage=1 00:22:31.395 --rc genhtml_legend=1 00:22:31.395 --rc geninfo_all_blocks=1 00:22:31.395 --rc geninfo_unexecuted_blocks=1 00:22:31.395 00:22:31.395 ' 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.395 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.655 11:24:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.221 11:24:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:38.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.221 
11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:38.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.221 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:38.222 Found net devices under 0000:af:00.0: cvl_0_0 00:22:38.222 11:24:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:38.222 Found net devices under 0000:af:00.1: cvl_0_1 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:22:38.222 00:22:38.222 --- 10.0.0.2 ping statistics --- 00:22:38.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.222 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:22:38.222 00:22:38.222 --- 10.0.0.1 ping statistics --- 00:22:38.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.222 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1805825 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1805825 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1805825 ']' 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
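The target is launched inside the `cvl_0_0_ns_spdk` namespace and the harness then blocks until the RPC socket `/var/tmp/spdk.sock` appears. A minimal sketch of such a wait loop (the helper name and timeout are illustrative only, not the harness's actual `waitforlisten` implementation):

```shell
# Hypothetical helper: poll until a UNIX domain socket exists.
# Returns 0 once the socket shows up, 1 after roughly $timeout seconds.
wait_for_rpc_sock() {
    local sock=$1 timeout=${2:-30} i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$timeout" ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}
```

Something like `wait_for_rpc_sock /var/tmp/spdk.sock 100` would gate any subsequent `rpc_cmd` calls on the target being ready.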
00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.222 11:24:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.222 [2024-12-06 11:24:10.399712] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:38.222 [2024-12-06 11:24:10.399755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.222 [2024-12-06 11:24:10.473980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.222 [2024-12-06 11:24:10.515471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.222 [2024-12-06 11:24:10.515506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.222 [2024-12-06 11:24:10.515512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.222 [2024-12-06 11:24:10.515518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.222 [2024-12-06 11:24:10.515523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
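The `-m 0xF` core mask passed to `nvmf_tgt` above selects cores 0 through 3, which matches the four reactor threads reported next. A small sketch (the helper name is illustrative) of how a hex core mask expands into core indices:

```shell
# Illustrative helper: expand a hex core mask into the list of
# core indices whose bits are set, lowest bit = core 0.
cores_from_mask() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            out="$out $core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out# }"   # strip the leading space
}
```

For example, `cores_from_mask 0xF` prints `0 1 2 3`, and a sparse mask such as `0x5` would yield `0 2`.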
00:22:38.222 [2024-12-06 11:24:10.516955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.222 [2024-12-06 11:24:10.517086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.223 [2024-12-06 11:24:10.517151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.223 [2024-12-06 11:24:10.517152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 [2024-12-06 11:24:11.211341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 Malloc0 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 [2024-12-06 11:24:11.313264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 11:24:11 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.482 [ 00:22:38.482 { 00:22:38.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.482 "subtype": "Discovery", 00:22:38.482 "listen_addresses": [ 00:22:38.482 { 00:22:38.482 "trtype": "TCP", 00:22:38.482 "adrfam": "IPv4", 00:22:38.482 "traddr": "10.0.0.2", 00:22:38.482 "trsvcid": "4420" 00:22:38.482 } 00:22:38.482 ], 00:22:38.482 "allow_any_host": true, 00:22:38.482 "hosts": [] 00:22:38.482 }, 00:22:38.482 { 00:22:38.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.482 "subtype": "NVMe", 00:22:38.482 "listen_addresses": [ 00:22:38.482 { 00:22:38.482 "trtype": "TCP", 00:22:38.482 "adrfam": "IPv4", 00:22:38.482 "traddr": "10.0.0.2", 00:22:38.482 "trsvcid": "4420" 00:22:38.482 } 00:22:38.482 ], 00:22:38.482 "allow_any_host": true, 00:22:38.482 "hosts": [], 00:22:38.482 "serial_number": "SPDK00000000000001", 00:22:38.482 "model_number": "SPDK bdev Controller", 00:22:38.482 "max_namespaces": 32, 00:22:38.482 "min_cntlid": 1, 00:22:38.482 "max_cntlid": 65519, 00:22:38.482 "namespaces": [ 00:22:38.482 { 00:22:38.482 "nsid": 1, 00:22:38.482 "bdev_name": "Malloc0", 00:22:38.482 "name": "Malloc0", 00:22:38.482 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:38.482 "eui64": "ABCDEF0123456789", 00:22:38.482 "uuid": "d8704c91-905a-48e7-a855-095e00adc563" 00:22:38.482 } 00:22:38.482 ] 00:22:38.482 } 00:22:38.482 ] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.482 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:38.482 [2024-12-06 11:24:11.366323] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:38.482 [2024-12-06 11:24:11.366360] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806104 ] 00:22:38.482 [2024-12-06 11:24:11.404361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:38.482 [2024-12-06 11:24:11.404405] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:38.483 [2024-12-06 11:24:11.404410] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:38.483 [2024-12-06 11:24:11.404422] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:38.483 [2024-12-06 11:24:11.404430] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:38.483 [2024-12-06 11:24:11.408383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:38.483 [2024-12-06 11:24:11.408417] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c5a550 0 00:22:38.483 [2024-12-06 11:24:11.416072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:38.483 [2024-12-06 11:24:11.416091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:38.483 [2024-12-06 11:24:11.416095] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:38.483 [2024-12-06 11:24:11.416098] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:38.483 [2024-12-06 11:24:11.416132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.483 [2024-12-06 11:24:11.416138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.483 [2024-12-06 11:24:11.416142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.483 [2024-12-06 11:24:11.416154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:38.483 [2024-12-06 11:24:11.416171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.749 [2024-12-06 11:24:11.424066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.749 [2024-12-06 11:24:11.424077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.749 [2024-12-06 11:24:11.424081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.749 [2024-12-06 11:24:11.424094] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:38.749 [2024-12-06 11:24:11.424101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:38.749 [2024-12-06 11:24:11.424106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:38.749 [2024-12-06 11:24:11.424120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 
00:22:38.749 [2024-12-06 11:24:11.424135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.749 [2024-12-06 11:24:11.424147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.749 [2024-12-06 11:24:11.424311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.749 [2024-12-06 11:24:11.424317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.749 [2024-12-06 11:24:11.424320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.749 [2024-12-06 11:24:11.424328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:38.749 [2024-12-06 11:24:11.424336] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:38.749 [2024-12-06 11:24:11.424342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.749 [2024-12-06 11:24:11.424353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.749 [2024-12-06 11:24:11.424363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.749 [2024-12-06 11:24:11.424423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.749 [2024-12-06 11:24:11.424428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:38.749 [2024-12-06 11:24:11.424431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.749 [2024-12-06 11:24:11.424439] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:38.749 [2024-12-06 11:24:11.424445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:38.749 [2024-12-06 11:24:11.424450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.749 [2024-12-06 11:24:11.424456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.424461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.750 [2024-12-06 11:24:11.424470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 11:24:11.424524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.424529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.424532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.424539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:38.750 [2024-12-06 11:24:11.424546] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.424557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.750 [2024-12-06 11:24:11.424566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 11:24:11.424621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.424626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.424629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.424635] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:38.750 [2024-12-06 11:24:11.424640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:38.750 [2024-12-06 11:24:11.424647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:38.750 [2024-12-06 11:24:11.424757] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:38.750 [2024-12-06 11:24:11.424761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:38.750 [2024-12-06 11:24:11.424768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.424779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.750 [2024-12-06 11:24:11.424788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 11:24:11.424846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.424851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.424853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.424860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:38.750 [2024-12-06 11:24:11.424867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.424878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.750 [2024-12-06 11:24:11.424886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 
11:24:11.424943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.424948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.424951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.424957] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:38.750 [2024-12-06 11:24:11.424961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.424967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:38.750 [2024-12-06 11:24:11.424978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.424985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.424988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.424993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.750 [2024-12-06 11:24:11.425002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 11:24:11.425092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.750 [2024-12-06 11:24:11.425099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:38.750 [2024-12-06 11:24:11.425102] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.425105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c5a550): datao=0, datal=4096, cccid=0 00:22:38.750 [2024-12-06 11:24:11.425109] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbc100) on tqpair(0x1c5a550): expected_datao=0, payload_size=4096 00:22:38.750 [2024-12-06 11:24:11.425113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.425119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.425122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.466226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.466229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.466240] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:38.750 [2024-12-06 11:24:11.466247] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:38.750 [2024-12-06 11:24:11.466252] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:38.750 [2024-12-06 11:24:11.466256] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:38.750 [2024-12-06 11:24:11.466261] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:38.750 [2024-12-06 11:24:11.466265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.466273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.466279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.750 [2024-12-06 11:24:11.466304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.750 [2024-12-06 11:24:11.466365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.750 [2024-12-06 11:24:11.466370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.750 [2024-12-06 11:24:11.466373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.750 [2024-12-06 11:24:11.466382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466392] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.750 [2024-12-06 11:24:11.466397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.750 [2024-12-06 11:24:11.466414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.750 [2024-12-06 11:24:11.466429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.750 [2024-12-06 11:24:11.466443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.466452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:38.750 [2024-12-06 11:24:11.466458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.750 [2024-12-06 11:24:11.466461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c5a550) 00:22:38.750 [2024-12-06 11:24:11.466465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.751 [2024-12-06 11:24:11.466476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc100, cid 0, qid 0 00:22:38.751 [2024-12-06 11:24:11.466480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc280, cid 1, qid 0 00:22:38.751 [2024-12-06 11:24:11.466484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc400, cid 2, qid 0 00:22:38.751 [2024-12-06 11:24:11.466487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.751 [2024-12-06 11:24:11.466491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc700, cid 4, qid 0 00:22:38.751 [2024-12-06 11:24:11.466577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.751 [2024-12-06 11:24:11.466582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.751 [2024-12-06 11:24:11.466584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc700) on tqpair=0x1c5a550 00:22:38.751 [2024-12-06 11:24:11.466591] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:38.751 [2024-12-06 11:24:11.466595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:38.751 [2024-12-06 11:24:11.466604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c5a550) 00:22:38.751 [2024-12-06 11:24:11.466612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.751 [2024-12-06 11:24:11.466621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc700, cid 4, qid 0 00:22:38.751 [2024-12-06 11:24:11.466684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.751 [2024-12-06 11:24:11.466689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.751 [2024-12-06 11:24:11.466692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466694] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c5a550): datao=0, datal=4096, cccid=4 00:22:38.751 [2024-12-06 11:24:11.466700] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbc700) on tqpair(0x1c5a550): expected_datao=0, payload_size=4096 00:22:38.751 [2024-12-06 11:24:11.466704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466714] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466717] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.751 [2024-12-06 11:24:11.466753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.751 [2024-12-06 11:24:11.466755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1cbc700) on tqpair=0x1c5a550 00:22:38.751 [2024-12-06 11:24:11.466768] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:38.751 [2024-12-06 11:24:11.466786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c5a550) 00:22:38.751 [2024-12-06 11:24:11.466795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.751 [2024-12-06 11:24:11.466800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c5a550) 00:22:38.751 [2024-12-06 11:24:11.466811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.751 [2024-12-06 11:24:11.466823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc700, cid 4, qid 0 00:22:38.751 [2024-12-06 11:24:11.466827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc880, cid 5, qid 0 00:22:38.751 [2024-12-06 11:24:11.466924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.751 [2024-12-06 11:24:11.466929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.751 [2024-12-06 11:24:11.466932] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466934] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c5a550): datao=0, datal=1024, cccid=4 00:22:38.751 [2024-12-06 11:24:11.466938] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbc700) on tqpair(0x1c5a550): expected_datao=0, payload_size=1024 00:22:38.751 [2024-12-06 11:24:11.466941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466946] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466949] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.751 [2024-12-06 11:24:11.466958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.751 [2024-12-06 11:24:11.466960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.466963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc880) on tqpair=0x1c5a550 00:22:38.751 [2024-12-06 11:24:11.508066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.751 [2024-12-06 11:24:11.508077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.751 [2024-12-06 11:24:11.508080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc700) on tqpair=0x1c5a550 00:22:38.751 [2024-12-06 11:24:11.508094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c5a550) 00:22:38.751 [2024-12-06 11:24:11.508103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.751 [2024-12-06 11:24:11.508121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc700, cid 4, qid 0 00:22:38.751 [2024-12-06 11:24:11.508276] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.751 [2024-12-06 11:24:11.508281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.751 [2024-12-06 11:24:11.508284] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508287] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c5a550): datao=0, datal=3072, cccid=4 00:22:38.751 [2024-12-06 11:24:11.508290] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbc700) on tqpair(0x1c5a550): expected_datao=0, payload_size=3072 00:22:38.751 [2024-12-06 11:24:11.508294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508308] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508312] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.751 [2024-12-06 11:24:11.508372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.751 [2024-12-06 11:24:11.508375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc700) on tqpair=0x1c5a550 00:22:38.751 [2024-12-06 11:24:11.508385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.751 [2024-12-06 11:24:11.508388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c5a550) 00:22:38.751 [2024-12-06 11:24:11.508393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.751 [2024-12-06 11:24:11.508406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc700, cid 4, qid 0 00:22:38.751 [2024-12-06 
11:24:11.508475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:38.751 [2024-12-06 11:24:11.508480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:38.751 [2024-12-06 11:24:11.508483] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:38.751 [2024-12-06 11:24:11.508485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c5a550): datao=0, datal=8, cccid=4
00:22:38.751 [2024-12-06 11:24:11.508489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbc700) on tqpair(0x1c5a550): expected_datao=0, payload_size=8
00:22:38.751 [2024-12-06 11:24:11.508492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:38.751 [2024-12-06 11:24:11.508497] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:38.751 [2024-12-06 11:24:11.508500] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:38.751 [2024-12-06 11:24:11.554067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:38.751 [2024-12-06 11:24:11.554078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:38.751 [2024-12-06 11:24:11.554081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:38.751 [2024-12-06 11:24:11.554085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc700) on tqpair=0x1c5a550
00:22:38.751 =====================================================
00:22:38.751 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:38.751 =====================================================
00:22:38.751 Controller Capabilities/Features
00:22:38.751 ================================
00:22:38.751 Vendor ID: 0000
00:22:38.751 Subsystem Vendor ID: 0000
00:22:38.751 Serial Number: ....................
00:22:38.751 Model Number: ........................................
00:22:38.751 Firmware Version: 25.01
00:22:38.751 Recommended Arb Burst: 0
00:22:38.751 IEEE OUI Identifier: 00 00 00
00:22:38.751 Multi-path I/O
00:22:38.751 May have multiple subsystem ports: No
00:22:38.751 May have multiple controllers: No
00:22:38.751 Associated with SR-IOV VF: No
00:22:38.751 Max Data Transfer Size: 131072
00:22:38.751 Max Number of Namespaces: 0
00:22:38.751 Max Number of I/O Queues: 1024
00:22:38.751 NVMe Specification Version (VS): 1.3
00:22:38.751 NVMe Specification Version (Identify): 1.3
00:22:38.751 Maximum Queue Entries: 128
00:22:38.751 Contiguous Queues Required: Yes
00:22:38.751 Arbitration Mechanisms Supported
00:22:38.751 Weighted Round Robin: Not Supported
00:22:38.751 Vendor Specific: Not Supported
00:22:38.751 Reset Timeout: 15000 ms
00:22:38.751 Doorbell Stride: 4 bytes
00:22:38.751 NVM Subsystem Reset: Not Supported
00:22:38.751 Command Sets Supported
00:22:38.751 NVM Command Set: Supported
00:22:38.751 Boot Partition: Not Supported
00:22:38.751 Memory Page Size Minimum: 4096 bytes
00:22:38.751 Memory Page Size Maximum: 4096 bytes
00:22:38.752 Persistent Memory Region: Not Supported
00:22:38.752 Optional Asynchronous Events Supported
00:22:38.752 Namespace Attribute Notices: Not Supported
00:22:38.752 Firmware Activation Notices: Not Supported
00:22:38.752 ANA Change Notices: Not Supported
00:22:38.752 PLE Aggregate Log Change Notices: Not Supported
00:22:38.752 LBA Status Info Alert Notices: Not Supported
00:22:38.752 EGE Aggregate Log Change Notices: Not Supported
00:22:38.752 Normal NVM Subsystem Shutdown event: Not Supported
00:22:38.752 Zone Descriptor Change Notices: Not Supported
00:22:38.752 Discovery Log Change Notices: Supported
00:22:38.752 Controller Attributes
00:22:38.752 128-bit Host Identifier: Not Supported
00:22:38.752 Non-Operational Permissive Mode: Not Supported
00:22:38.752 NVM Sets: Not Supported
00:22:38.752 Read Recovery Levels: Not Supported
00:22:38.752 Endurance Groups: Not Supported
00:22:38.752 Predictable Latency Mode: Not Supported
00:22:38.752 Traffic Based Keep ALive: Not Supported
00:22:38.752 Namespace Granularity: Not Supported
00:22:38.752 SQ Associations: Not Supported
00:22:38.752 UUID List: Not Supported
00:22:38.752 Multi-Domain Subsystem: Not Supported
00:22:38.752 Fixed Capacity Management: Not Supported
00:22:38.752 Variable Capacity Management: Not Supported
00:22:38.752 Delete Endurance Group: Not Supported
00:22:38.752 Delete NVM Set: Not Supported
00:22:38.752 Extended LBA Formats Supported: Not Supported
00:22:38.752 Flexible Data Placement Supported: Not Supported
00:22:38.752
00:22:38.752 Controller Memory Buffer Support
00:22:38.752 ================================
00:22:38.752 Supported: No
00:22:38.752
00:22:38.752 Persistent Memory Region Support
00:22:38.752 ================================
00:22:38.752 Supported: No
00:22:38.752
00:22:38.752 Admin Command Set Attributes
00:22:38.752 ============================
00:22:38.752 Security Send/Receive: Not Supported
00:22:38.752 Format NVM: Not Supported
00:22:38.752 Firmware Activate/Download: Not Supported
00:22:38.752 Namespace Management: Not Supported
00:22:38.752 Device Self-Test: Not Supported
00:22:38.752 Directives: Not Supported
00:22:38.752 NVMe-MI: Not Supported
00:22:38.752 Virtualization Management: Not Supported
00:22:38.752 Doorbell Buffer Config: Not Supported
00:22:38.752 Get LBA Status Capability: Not Supported
00:22:38.752 Command & Feature Lockdown Capability: Not Supported
00:22:38.752 Abort Command Limit: 1
00:22:38.752 Async Event Request Limit: 4
00:22:38.752 Number of Firmware Slots: N/A
00:22:38.752 Firmware Slot 1 Read-Only: N/A
00:22:38.752 Firmware Activation Without Reset: N/A
00:22:38.752 Multiple Update Detection Support: N/A
00:22:38.752 Firmware Update Granularity: No Information Provided
00:22:38.752 Per-Namespace SMART Log: No
00:22:38.752 Asymmetric Namespace Access Log Page: Not Supported
00:22:38.752 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:38.752 Command Effects Log Page: Not Supported
00:22:38.752 Get Log Page Extended Data: Supported
00:22:38.752 Telemetry Log Pages: Not Supported
00:22:38.752 Persistent Event Log Pages: Not Supported
00:22:38.752 Supported Log Pages Log Page: May Support
00:22:38.752 Commands Supported & Effects Log Page: Not Supported
00:22:38.752 Feature Identifiers & Effects Log Page:May Support
00:22:38.752 NVMe-MI Commands & Effects Log Page: May Support
00:22:38.752 Data Area 4 for Telemetry Log: Not Supported
00:22:38.752 Error Log Page Entries Supported: 128
00:22:38.752 Keep Alive: Not Supported
00:22:38.752
00:22:38.752 NVM Command Set Attributes
00:22:38.752 ==========================
00:22:38.752 Submission Queue Entry Size
00:22:38.752 Max: 1
00:22:38.752 Min: 1
00:22:38.752 Completion Queue Entry Size
00:22:38.752 Max: 1
00:22:38.752 Min: 1
00:22:38.752 Number of Namespaces: 0
00:22:38.752 Compare Command: Not Supported
00:22:38.752 Write Uncorrectable Command: Not Supported
00:22:38.752 Dataset Management Command: Not Supported
00:22:38.752 Write Zeroes Command: Not Supported
00:22:38.752 Set Features Save Field: Not Supported
00:22:38.752 Reservations: Not Supported
00:22:38.752 Timestamp: Not Supported
00:22:38.752 Copy: Not Supported
00:22:38.752 Volatile Write Cache: Not Present
00:22:38.752 Atomic Write Unit (Normal): 1
00:22:38.752 Atomic Write Unit (PFail): 1
00:22:38.752 Atomic Compare & Write Unit: 1
00:22:38.752 Fused Compare & Write: Supported
00:22:38.752 Scatter-Gather List
00:22:38.752 SGL Command Set: Supported
00:22:38.752 SGL Keyed: Supported
00:22:38.752 SGL Bit Bucket Descriptor: Not Supported
00:22:38.752 SGL Metadata Pointer: Not Supported
00:22:38.752 Oversized SGL: Not Supported
00:22:38.752 SGL Metadata Address: Not Supported
00:22:38.752 SGL Offset: Supported
00:22:38.752 Transport SGL Data Block: Not Supported
00:22:38.752 Replay Protected Memory Block: Not Supported
00:22:38.752
00:22:38.752 Firmware Slot Information
00:22:38.752 =========================
00:22:38.752 Active slot: 0
00:22:38.752
00:22:38.752
00:22:38.752 Error Log
00:22:38.752 =========
00:22:38.752
00:22:38.752 Active Namespaces
00:22:38.752 =================
00:22:38.752 Discovery Log Page
00:22:38.752 ==================
00:22:38.752 Generation Counter: 2
00:22:38.752 Number of Records: 2
00:22:38.752 Record Format: 0
00:22:38.752
00:22:38.752 Discovery Log Entry 0
00:22:38.752 ----------------------
00:22:38.752 Transport Type: 3 (TCP)
00:22:38.752 Address Family: 1 (IPv4)
00:22:38.752 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:38.752 Entry Flags:
00:22:38.752 Duplicate Returned Information: 1
00:22:38.752 Explicit Persistent Connection Support for Discovery: 1
00:22:38.752 Transport Requirements:
00:22:38.752 Secure Channel: Not Required
00:22:38.752 Port ID: 0 (0x0000)
00:22:38.752 Controller ID: 65535 (0xffff)
00:22:38.752 Admin Max SQ Size: 128
00:22:38.752 Transport Service Identifier: 4420
00:22:38.752 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:38.752 Transport Address: 10.0.0.2
00:22:38.752 Discovery Log Entry 1
00:22:38.752 ----------------------
00:22:38.752 Transport Type: 3 (TCP)
00:22:38.752 Address Family: 1 (IPv4)
00:22:38.752 Subsystem Type: 2 (NVM Subsystem)
00:22:38.752 Entry Flags:
00:22:38.752 Duplicate Returned Information: 0
00:22:38.752 Explicit Persistent Connection Support for Discovery: 0
00:22:38.752 Transport Requirements:
00:22:38.752 Secure Channel: Not Required
00:22:38.752 Port ID: 0 (0x0000)
00:22:38.752 Controller ID: 65535 (0xffff)
00:22:38.752 Admin Max SQ Size: 128
00:22:38.752 Transport Service Identifier: 4420
00:22:38.752 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:38.752 Transport Address: 10.0.0.2 [2024-12-06 11:24:11.554158] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:38.752 [2024-12-06
11:24:11.554169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc100) on tqpair=0x1c5a550 00:22:38.752 [2024-12-06 11:24:11.554175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.752 [2024-12-06 11:24:11.554179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc280) on tqpair=0x1c5a550 00:22:38.752 [2024-12-06 11:24:11.554183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.752 [2024-12-06 11:24:11.554186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc400) on tqpair=0x1c5a550 00:22:38.752 [2024-12-06 11:24:11.554191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.752 [2024-12-06 11:24:11.554195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.752 [2024-12-06 11:24:11.554199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.752 [2024-12-06 11:24:11.554208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.752 [2024-12-06 11:24:11.554212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.752 [2024-12-06 11:24:11.554214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.752 [2024-12-06 11:24:11.554221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.752 [2024-12-06 11:24:11.554234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.752 [2024-12-06 11:24:11.554317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.752 [2024-12-06 
11:24:11.554323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.752 [2024-12-06 11:24:11.554326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.752 [2024-12-06 11:24:11.554329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.752 [2024-12-06 11:24:11.554334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.752 [2024-12-06 11:24:11.554337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.554448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554455] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:38.753 [2024-12-06 11:24:11.554458] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:38.753 [2024-12-06 11:24:11.554465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 
[2024-12-06 11:24:11.554471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.554569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.554684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on 
tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.554779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:38.753 [2024-12-06 11:24:11.554870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.554900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.554966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.554971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.554974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.554984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.554990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.554995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.555086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.555096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.555106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.555185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.555195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.555206] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.555276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.555287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.555297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.555371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.555381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555384] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.555391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.753 [2024-12-06 11:24:11.555470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.753 [2024-12-06 11:24:11.555480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.753 [2024-12-06 11:24:11.555486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.753 [2024-12-06 11:24:11.555491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.753 [2024-12-06 11:24:11.555499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.753 [2024-12-06 11:24:11.555558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.753 [2024-12-06 11:24:11.555563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.555565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555568] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.555575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.555586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.555594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.555649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.555654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.555656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.555667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.555678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.555687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.555744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 
11:24:11.555749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.555751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.555761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.555772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.555780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.555835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.555840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.555843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.555853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.555864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 
11:24:11.555872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.555932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.555937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.555940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.555949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.555955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.555960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.555968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.556025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.556030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.556033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.556042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.556053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.556065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.556123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.556128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.556130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.556140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.556151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.556160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.556218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.556225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.556227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.556238] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.556249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.556257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.556313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.556318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.556321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.556331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.754 [2024-12-06 11:24:11.556342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.754 [2024-12-06 11:24:11.556350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.754 [2024-12-06 11:24:11.556405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.754 [2024-12-06 11:24:11.556410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.754 [2024-12-06 11:24:11.556413] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.754 [2024-12-06 11:24:11.556423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.754 [2024-12-06 11:24:11.556426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.556499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 
11:24:11.556589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.556685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.556777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.556872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:38.755 [2024-12-06 11:24:11.556895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.556909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.556966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.556971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.556974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.556986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.556992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.556997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.557083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) 
on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.557105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.557191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.557212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:38.755 [2024-12-06 11:24:11.557284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.557305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.557379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.557401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.557473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.755 [2024-12-06 11:24:11.557493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.755 [2024-12-06 11:24:11.557502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.755 [2024-12-06 11:24:11.557564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.755 [2024-12-06 11:24:11.557569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.755 [2024-12-06 11:24:11.557571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.755 [2024-12-06 11:24:11.557575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.755 [2024-12-06 11:24:11.557582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.557593] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.557602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.557655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.557660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.557663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.557673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.557684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.557692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.557753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.557758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.557761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.557771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557775] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.557783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.557791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.557853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.557858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.557860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.557871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.557881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.557891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.557948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.557953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.557955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557958] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.557966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.557971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.557976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.557985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.558045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.558050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.558053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.558055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.562068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.562073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.562075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c5a550) 00:22:38.756 [2024-12-06 11:24:11.562080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.562091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbc580, cid 3, qid 0 00:22:38.756 [2024-12-06 11:24:11.562154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 
11:24:11.562159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.562162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.562165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbc580) on tqpair=0x1c5a550 00:22:38.756 [2024-12-06 11:24:11.562171] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:38.756 00:22:38.756 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:38.756 [2024-12-06 11:24:11.599573] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:38.756 [2024-12-06 11:24:11.599606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806106 ] 00:22:38.756 [2024-12-06 11:24:11.638028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:38.756 [2024-12-06 11:24:11.638074] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:38.756 [2024-12-06 11:24:11.638080] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:38.756 [2024-12-06 11:24:11.638091] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:38.756 [2024-12-06 11:24:11.638099] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:38.756 [2024-12-06 11:24:11.638515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting 
state to wait for connect adminq (no timeout) 00:22:38.756 [2024-12-06 11:24:11.638540] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x765550 0 00:22:38.756 [2024-12-06 11:24:11.645066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:38.756 [2024-12-06 11:24:11.645079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:38.756 [2024-12-06 11:24:11.645083] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:38.756 [2024-12-06 11:24:11.645085] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:38.756 [2024-12-06 11:24:11.645110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.645114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.645117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.756 [2024-12-06 11:24:11.645126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:38.756 [2024-12-06 11:24:11.645142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.756 [2024-12-06 11:24:11.653067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.653075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.653078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.756 [2024-12-06 11:24:11.653091] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:38.756 [2024-12-06 11:24:11.653096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no 
timeout) 00:22:38.756 [2024-12-06 11:24:11.653101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:38.756 [2024-12-06 11:24:11.653110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.756 [2024-12-06 11:24:11.653122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.653134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.756 [2024-12-06 11:24:11.653301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.653307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.756 [2024-12-06 11:24:11.653310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.756 [2024-12-06 11:24:11.653316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:38.756 [2024-12-06 11:24:11.653322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:38.756 [2024-12-06 11:24:11.653328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.756 [2024-12-06 11:24:11.653334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.756 
[2024-12-06 11:24:11.653339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.756 [2024-12-06 11:24:11.653348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.756 [2024-12-06 11:24:11.653449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.756 [2024-12-06 11:24:11.653454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.653456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.653463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:38.757 [2024-12-06 11:24:11.653469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.653486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.757 [2024-12-06 11:24:11.653494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.653551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.653556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 
11:24:11.653558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.653565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.653584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.757 [2024-12-06 11:24:11.653592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.653645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.653651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.653653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.653661] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:38.757 [2024-12-06 11:24:11.653665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653779] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:38.757 [2024-12-06 11:24:11.653783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.653800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.757 [2024-12-06 11:24:11.653809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.653873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.653878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.653881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.653887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:38.757 [2024-12-06 11:24:11.653894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653900] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.653905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.757 [2024-12-06 11:24:11.653914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.653974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.653979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.653981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.653984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.653988] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:38.757 [2024-12-06 11:24:11.653991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:38.757 [2024-12-06 11:24:11.653997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:38.757 [2024-12-06 11:24:11.654003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:38.757 [2024-12-06 11:24:11.654010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.757 [2024-12-06 11:24:11.654030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.654129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.757 [2024-12-06 11:24:11.654134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.757 [2024-12-06 11:24:11.654137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=4096, cccid=0 00:22:38.757 [2024-12-06 11:24:11.654144] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7100) on tqpair(0x765550): expected_datao=0, payload_size=4096 00:22:38.757 [2024-12-06 11:24:11.654147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654153] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654156] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.654179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.654182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.654190] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:38.757 [2024-12-06 11:24:11.654196] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:38.757 [2024-12-06 11:24:11.654200] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:38.757 [2024-12-06 11:24:11.654203] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:38.757 [2024-12-06 11:24:11.654207] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:38.757 [2024-12-06 11:24:11.654210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:38.757 [2024-12-06 11:24:11.654216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:38.757 [2024-12-06 11:24:11.654222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.757 [2024-12-06 11:24:11.654242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.757 [2024-12-06 11:24:11.654327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.757 [2024-12-06 11:24:11.654332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.757 [2024-12-06 11:24:11.654335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.757 [2024-12-06 11:24:11.654342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654345] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.757 [2024-12-06 11:24:11.654357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.757 [2024-12-06 11:24:11.654373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.757 [2024-12-06 11:24:11.654387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.757 [2024-12-06 11:24:11.654393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.757 [2024-12-06 11:24:11.654397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.758 [2024-12-06 11:24:11.654401] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.758 [2024-12-06 11:24:11.654423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.758 [2024-12-06 11:24:11.654432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7100, cid 0, qid 0 00:22:38.758 [2024-12-06 11:24:11.654437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7280, cid 1, qid 0 00:22:38.758 [2024-12-06 11:24:11.654440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7400, cid 2, qid 0 00:22:38.758 [2024-12-06 11:24:11.654444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.758 [2024-12-06 11:24:11.654447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.758 [2024-12-06 11:24:11.654536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.758 [2024-12-06 11:24:11.654542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.758 [2024-12-06 11:24:11.654544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.758 [2024-12-06 11:24:11.654550] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:38.758 [2024-12-06 11:24:11.654554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.758 [2024-12-06 11:24:11.654583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.758 [2024-12-06 11:24:11.654592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.758 [2024-12-06 11:24:11.654678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.758 [2024-12-06 11:24:11.654683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.758 [2024-12-06 11:24:11.654685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.758 [2024-12-06 11:24:11.654735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654744] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.758 [2024-12-06 11:24:11.654758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.758 [2024-12-06 11:24:11.654767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.758 [2024-12-06 11:24:11.654835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.758 [2024-12-06 11:24:11.654840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.758 [2024-12-06 11:24:11.654843] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654845] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=4096, cccid=4 00:22:38.758 [2024-12-06 11:24:11.654849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7700) on tqpair(0x765550): expected_datao=0, payload_size=4096 00:22:38.758 [2024-12-06 11:24:11.654852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654857] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654860] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.758 [2024-12-06 11:24:11.654885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.758 [2024-12-06 11:24:11.654887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.758 [2024-12-06 11:24:11.654897] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:38.758 [2024-12-06 11:24:11.654909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.654921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.654924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.758 [2024-12-06 11:24:11.654929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.758 [2024-12-06 11:24:11.654938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.758 [2024-12-06 11:24:11.655014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.758 [2024-12-06 11:24:11.655019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.758 [2024-12-06 11:24:11.655021] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655026] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=4096, cccid=4 00:22:38.758 [2024-12-06 11:24:11.655029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7700) on tqpair(0x765550): expected_datao=0, payload_size=4096 00:22:38.758 [2024-12-06 11:24:11.655033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.758 
[2024-12-06 11:24:11.655038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.758 [2024-12-06 11:24:11.655087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.758 [2024-12-06 11:24:11.655089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.758 [2024-12-06 11:24:11.655103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.655110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.655116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.758 [2024-12-06 11:24:11.655124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.758 [2024-12-06 11:24:11.655133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.758 [2024-12-06 11:24:11.655205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.758 [2024-12-06 11:24:11.655210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.758 [2024-12-06 11:24:11.655213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655215] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=4096, cccid=4 00:22:38.758 [2024-12-06 11:24:11.655219] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7700) on tqpair(0x765550): expected_datao=0, payload_size=4096 00:22:38.758 [2024-12-06 11:24:11.655222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655227] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.758 [2024-12-06 11:24:11.655288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.758 [2024-12-06 11:24:11.655291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.758 [2024-12-06 11:24:11.655293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.758 [2024-12-06 11:24:11.655299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.655306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.655312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:38.758 [2024-12-06 11:24:11.655319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:38.759 [2024-12-06 11:24:11.655323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:22:38.759 [2024-12-06 11:24:11.655328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:38.759 [2024-12-06 11:24:11.655333] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:38.759 [2024-12-06 11:24:11.655337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:38.759 [2024-12-06 11:24:11.655341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:38.759 [2024-12-06 11:24:11.655352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.759 [2024-12-06 11:24:11.655386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 00:22:38.759 [2024-12-06 11:24:11.655390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7880, cid 5, qid 0 00:22:38.759 [2024-12-06 11:24:11.655505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:22:38.759 [2024-12-06 11:24:11.655510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.655513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.655520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.655525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.655528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7880) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.655538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7880, cid 5, qid 0 00:22:38.759 [2024-12-06 11:24:11.655654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.655659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.655662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7880) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.655672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655675] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7880, cid 5, qid 0 00:22:38.759 [2024-12-06 11:24:11.655745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.655751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.655755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7880) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.655765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7880, cid 5, qid 0 00:22:38.759 [2024-12-06 11:24:11.655856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.655861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.655863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7880) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.655879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:38.759 [2024-12-06 11:24:11.655882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.655922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x765550) 00:22:38.759 [2024-12-06 11:24:11.655926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.759 [2024-12-06 11:24:11.655936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7880, cid 5, qid 0 00:22:38.759 [2024-12-06 11:24:11.655940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7700, cid 4, qid 0 
00:22:38.759 [2024-12-06 11:24:11.655943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7a00, cid 6, qid 0 00:22:38.759 [2024-12-06 11:24:11.655947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7b80, cid 7, qid 0 00:22:38.759 [2024-12-06 11:24:11.656097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.759 [2024-12-06 11:24:11.656102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.759 [2024-12-06 11:24:11.656105] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=8192, cccid=5 00:22:38.759 [2024-12-06 11:24:11.656111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7880) on tqpair(0x765550): expected_datao=0, payload_size=8192 00:22:38.759 [2024-12-06 11:24:11.656114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656138] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656143] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.759 [2024-12-06 11:24:11.656151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.759 [2024-12-06 11:24:11.656154] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656157] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=512, cccid=4 00:22:38.759 [2024-12-06 11:24:11.656160] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7700) on tqpair(0x765550): expected_datao=0, payload_size=512 00:22:38.759 [2024-12-06 11:24:11.656163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.759 
[2024-12-06 11:24:11.656168] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.759 [2024-12-06 11:24:11.656179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.759 [2024-12-06 11:24:11.656182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656184] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=512, cccid=6 00:22:38.759 [2024-12-06 11:24:11.656188] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7a00) on tqpair(0x765550): expected_datao=0, payload_size=512 00:22:38.759 [2024-12-06 11:24:11.656191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656196] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.759 [2024-12-06 11:24:11.656207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.759 [2024-12-06 11:24:11.656209] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656212] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x765550): datao=0, datal=4096, cccid=7 00:22:38.759 [2024-12-06 11:24:11.656215] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c7b80) on tqpair(0x765550): expected_datao=0, payload_size=4096 00:22:38.759 [2024-12-06 11:24:11.656218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656223] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656226] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.656240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.759 [2024-12-06 11:24:11.656243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.759 [2024-12-06 11:24:11.656246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7880) on tqpair=0x765550 00:22:38.759 [2024-12-06 11:24:11.656255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.759 [2024-12-06 11:24:11.656259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.760 [2024-12-06 11:24:11.656262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.760 [2024-12-06 11:24:11.656264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7700) on tqpair=0x765550 00:22:38.760 [2024-12-06 11:24:11.656272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.760 [2024-12-06 11:24:11.656276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.760 [2024-12-06 11:24:11.656279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.760 [2024-12-06 11:24:11.656282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7a00) on tqpair=0x765550 00:22:38.760 [2024-12-06 11:24:11.656287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.760 [2024-12-06 11:24:11.656291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.760 [2024-12-06 11:24:11.656294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.760 [2024-12-06 11:24:11.656298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7b80) on tqpair=0x765550 00:22:38.760 
=====================================================
00:22:38.760 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:38.760 =====================================================
00:22:38.760 Controller Capabilities/Features
00:22:38.760 ================================
00:22:38.760 Vendor ID: 8086
00:22:38.760 Subsystem Vendor ID: 8086
00:22:38.760 Serial Number: SPDK00000000000001
00:22:38.760 Model Number: SPDK bdev Controller
00:22:38.760 Firmware Version: 25.01
00:22:38.760 Recommended Arb Burst: 6
00:22:38.760 IEEE OUI Identifier: e4 d2 5c
00:22:38.760 Multi-path I/O
00:22:38.760 May have multiple subsystem ports: Yes
00:22:38.760 May have multiple controllers: Yes
00:22:38.760 Associated with SR-IOV VF: No
00:22:38.760 Max Data Transfer Size: 131072
00:22:38.760 Max Number of Namespaces: 32
00:22:38.760 Max Number of I/O Queues: 127
00:22:38.760 NVMe Specification Version (VS): 1.3
00:22:38.760 NVMe Specification Version (Identify): 1.3
00:22:38.760 Maximum Queue Entries: 128
00:22:38.760 Contiguous Queues Required: Yes
00:22:38.760 Arbitration Mechanisms Supported
00:22:38.760 Weighted Round Robin: Not Supported
00:22:38.760 Vendor Specific: Not Supported
00:22:38.760 Reset Timeout: 15000 ms
00:22:38.760 Doorbell Stride: 4 bytes
00:22:38.760 NVM Subsystem Reset: Not Supported
00:22:38.760 Command Sets Supported
00:22:38.760 NVM Command Set: Supported
00:22:38.760 Boot Partition: Not Supported
00:22:38.760 Memory Page Size Minimum: 4096 bytes
00:22:38.760 Memory Page Size Maximum: 4096 bytes
00:22:38.760 Persistent Memory Region: Not Supported
00:22:38.760 Optional Asynchronous Events Supported
00:22:38.760 Namespace Attribute Notices: Supported
00:22:38.760 Firmware Activation Notices: Not Supported
00:22:38.760 ANA Change Notices: Not Supported
00:22:38.760 PLE Aggregate Log Change Notices: Not Supported
00:22:38.760 LBA Status Info Alert Notices: Not Supported
00:22:38.760 EGE Aggregate Log Change Notices: Not Supported
00:22:38.760 Normal NVM Subsystem Shutdown event: Not Supported
00:22:38.760 Zone Descriptor Change Notices: Not Supported
00:22:38.760 Discovery Log Change Notices: Not Supported
00:22:38.760 Controller Attributes
00:22:38.760 128-bit Host Identifier: Supported
00:22:38.760 Non-Operational Permissive Mode: Not Supported
00:22:38.760 NVM Sets: Not Supported
00:22:38.760 Read Recovery Levels: Not Supported
00:22:38.760 Endurance Groups: Not Supported
00:22:38.760 Predictable Latency Mode: Not Supported
00:22:38.760 Traffic Based Keep ALive: Not Supported
00:22:38.760 Namespace Granularity: Not Supported
00:22:38.760 SQ Associations: Not Supported
00:22:38.760 UUID List: Not Supported
00:22:38.760 Multi-Domain Subsystem: Not Supported
00:22:38.760 Fixed Capacity Management: Not Supported
00:22:38.760 Variable Capacity Management: Not Supported
00:22:38.760 Delete Endurance Group: Not Supported
00:22:38.760 Delete NVM Set: Not Supported
00:22:38.760 Extended LBA Formats Supported: Not Supported
00:22:38.760 Flexible Data Placement Supported: Not Supported
00:22:38.760
00:22:38.760 Controller Memory Buffer Support
00:22:38.760 ================================
00:22:38.760 Supported: No
00:22:38.760
00:22:38.760 Persistent Memory Region Support
00:22:38.760 ================================
00:22:38.760 Supported: No
00:22:38.760
00:22:38.760 Admin Command Set Attributes
00:22:38.760 ============================
00:22:38.760 Security Send/Receive: Not Supported
00:22:38.760 Format NVM: Not Supported
00:22:38.760 Firmware Activate/Download: Not Supported
00:22:38.760 Namespace Management: Not Supported
00:22:38.760 Device Self-Test: Not Supported
00:22:38.760 Directives: Not Supported
00:22:38.760 NVMe-MI: Not Supported
00:22:38.760 Virtualization Management: Not Supported
00:22:38.760 Doorbell Buffer Config: Not Supported
00:22:38.760 Get LBA Status Capability: Not Supported
00:22:38.760 Command & Feature Lockdown Capability: Not Supported
00:22:38.760 Abort Command Limit: 4
00:22:38.760 Async Event Request Limit: 4
00:22:38.760 Number of Firmware Slots: N/A
00:22:38.760 Firmware Slot 1 Read-Only: N/A
00:22:38.760 Firmware Activation Without Reset: N/A
00:22:38.760 Multiple Update Detection Support: N/A
00:22:38.760 Firmware Update Granularity: No Information Provided
00:22:38.760 Per-Namespace SMART Log: No
00:22:38.760 Asymmetric Namespace Access Log Page: Not Supported
00:22:38.760 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:38.760 Command Effects Log Page: Supported
00:22:38.760 Get Log Page Extended Data: Supported
00:22:38.760 Telemetry Log Pages: Not Supported
00:22:38.760 Persistent Event Log Pages: Not Supported
00:22:38.760 Supported Log Pages Log Page: May Support
00:22:38.760 Commands Supported & Effects Log Page: Not Supported
00:22:38.760 Feature Identifiers & Effects Log Page:May Support
00:22:38.760 NVMe-MI Commands & Effects Log Page: May Support
00:22:38.760 Data Area 4 for Telemetry Log: Not Supported
00:22:38.760 Error Log Page Entries Supported: 128
00:22:38.760 Keep Alive: Supported
00:22:38.760 Keep Alive Granularity: 10000 ms
00:22:38.760
00:22:38.760 NVM Command Set Attributes
00:22:38.760 ==========================
00:22:38.760 Submission Queue Entry Size
00:22:38.760 Max: 64
00:22:38.760 Min: 64
00:22:38.760 Completion Queue Entry Size
00:22:38.760 Max: 16
00:22:38.760 Min: 16
00:22:38.760 Number of Namespaces: 32
00:22:38.760 Compare Command: Supported
00:22:38.760 Write Uncorrectable Command: Not Supported
00:22:38.760 Dataset Management Command: Supported
00:22:38.760 Write Zeroes Command: Supported
00:22:38.760 Set Features Save Field: Not Supported
00:22:38.760 Reservations: Supported
00:22:38.760 Timestamp: Not Supported
00:22:38.760 Copy: Supported
00:22:38.760 Volatile Write Cache: Present
00:22:38.760 Atomic Write Unit (Normal): 1
00:22:38.760 Atomic Write Unit (PFail): 1
00:22:38.760 Atomic Compare & Write Unit: 1
00:22:38.760 Fused Compare & Write: Supported
00:22:38.760 Scatter-Gather List
00:22:38.760 SGL Command Set: Supported
00:22:38.760 SGL Keyed: Supported
00:22:38.760 SGL Bit Bucket Descriptor: Not Supported
00:22:38.760 SGL Metadata Pointer: Not Supported
00:22:38.760 Oversized SGL: Not Supported
00:22:38.760 SGL Metadata Address: Not Supported
00:22:38.760 SGL Offset: Supported
00:22:38.760 Transport SGL Data Block: Not Supported
00:22:38.760 Replay Protected Memory Block: Not Supported
00:22:38.760
00:22:38.760 Firmware Slot Information
00:22:38.760 =========================
00:22:38.760 Active slot: 1
00:22:38.760 Slot 1 Firmware Revision: 25.01
00:22:38.760
00:22:38.760
00:22:38.760 Commands Supported and Effects
00:22:38.760 ==============================
00:22:38.760 Admin Commands
00:22:38.760 --------------
00:22:38.760 Get Log Page (02h): Supported
00:22:38.760 Identify (06h): Supported
00:22:38.760 Abort (08h): Supported
00:22:38.760 Set Features (09h): Supported
00:22:38.760 Get Features (0Ah): Supported
00:22:38.760 Asynchronous Event Request (0Ch): Supported
00:22:38.760 Keep Alive (18h): Supported
00:22:38.760 I/O Commands
00:22:38.760 ------------
00:22:38.760 Flush (00h): Supported LBA-Change
00:22:38.760 Write (01h): Supported LBA-Change
00:22:38.760 Read (02h): Supported
00:22:38.760 Compare (05h): Supported
00:22:38.760 Write Zeroes (08h): Supported LBA-Change
00:22:38.760 Dataset Management (09h): Supported LBA-Change
00:22:38.760 Copy (19h): Supported LBA-Change
00:22:38.760
00:22:38.760 Error Log
00:22:38.760 =========
00:22:38.760
00:22:38.760 Arbitration
00:22:38.760 ===========
00:22:38.760 Arbitration Burst: 1
00:22:38.760
00:22:38.760 Power Management
00:22:38.760 ================
00:22:38.760 Number of Power States: 1
00:22:38.760 Current Power State: Power State #0
00:22:38.760 Power State #0:
00:22:38.760 Max Power: 0.00 W
00:22:38.760 Non-Operational State: Operational
00:22:38.760 Entry Latency: Not Reported
00:22:38.761 Exit Latency: Not Reported
00:22:38.761 Relative Read Throughput: 0
00:22:38.761 Relative Read Latency: 0
00:22:38.761 Relative Write Throughput: 0
00:22:38.761 Relative Write Latency: 0
00:22:38.761 Idle Power: Not Reported
00:22:38.761 Active Power: Not Reported
00:22:38.761 Non-Operational Permissive Mode: Not Supported
00:22:38.761
00:22:38.761 Health Information
00:22:38.761 ==================
00:22:38.761 Critical Warnings:
00:22:38.761 Available Spare Space: OK
00:22:38.761 Temperature: OK
00:22:38.761 Device Reliability: OK
00:22:38.761 Read Only: No
00:22:38.761 Volatile Memory Backup: OK
00:22:38.761 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:38.761 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:38.761 Available Spare: 0%
00:22:38.761 Available Spare Threshold: 0%
00:22:38.761 Life Percentage Used:[2024-12-06 11:24:11.656369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:38.761 [2024-12-06 11:24:11.656373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x765550)
00:22:38.761 [2024-12-06 11:24:11.656378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.761 [2024-12-06 11:24:11.656389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7b80, cid 7, qid 0
00:22:38.761 [2024-12-06 11:24:11.660063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:38.761 [2024-12-06 11:24:11.660070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:38.761 [2024-12-06 11:24:11.660072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:38.761 [2024-12-06 11:24:11.660075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7b80) on tqpair=0x765550
00:22:38.761 [2024-12-06 11:24:11.660104] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:38.761 [2024-12-06 11:24:11.660112] nvme_tcp.c:1011:nvme_tcp_req_complete:
*DEBUG*: complete tcp_req(0x7c7100) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.761 [2024-12-06 11:24:11.660121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7280) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.761 [2024-12-06 11:24:11.660129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7400) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.761 [2024-12-06 11:24:11.660136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.761 [2024-12-06 11:24:11.660146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660509] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:38.761 [2024-12-06 11:24:11.660513] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:38.761 [2024-12-06 11:24:11.660520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660763] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.660947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.660952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.660954] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.660964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.660971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.660975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.660984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 11:24:11.661039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.661044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.661046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.661061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.761 [2024-12-06 11:24:11.661072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.761 [2024-12-06 11:24:11.661081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.761 [2024-12-06 
11:24:11.661149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.761 [2024-12-06 11:24:11.661154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.761 [2024-12-06 11:24:11.661157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.761 [2024-12-06 11:24:11.661167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.761 [2024-12-06 11:24:11.661173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 
[2024-12-06 11:24:11.661470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 
00:22:38.762 [2024-12-06 11:24:11.661669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 
[2024-12-06 11:24:11.661856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.661954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.661959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.661961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.661971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.661977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.661982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.661991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 
00:22:38.762 [2024-12-06 11:24:11.662055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.662063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.662066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.662076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.662087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.662096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.662157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.662162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.662165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.662176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.662187] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.662196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.662256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.662261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.662263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.662273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.762 [2024-12-06 11:24:11.662284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.762 [2024-12-06 11:24:11.662293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.762 [2024-12-06 11:24:11.662358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.762 [2024-12-06 11:24:11.662363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.762 [2024-12-06 11:24:11.662366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.762 [2024-12-06 11:24:11.662376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662379] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.762 [2024-12-06 11:24:11.662382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.662395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.662464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.662496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.662563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662569] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.662597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.662659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.662691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 
11:24:11.662766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.662797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.662866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 
11:24:11.662897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.662963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.662968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.662971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.662981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.662988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.662993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.663001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.663067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.663072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.663075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.663085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.663096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.663105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.663215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.663220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.663223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.663233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.663244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.663252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.663315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.663320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.663322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.663333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:38.763 [2024-12-06 11:24:11.663336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.663343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.763 [2024-12-06 11:24:11.663352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.763 [2024-12-06 11:24:11.663416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.763 [2024-12-06 11:24:11.663421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.763 [2024-12-06 11:24:11.663424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.763 [2024-12-06 11:24:11.663434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.763 [2024-12-06 11:24:11.663440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.763 [2024-12-06 11:24:11.663446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.764 [2024-12-06 11:24:11.663454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.764 [2024-12-06 11:24:11.663510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.764 [2024-12-06 11:24:11.663515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.764 [2024-12-06 11:24:11.663518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:38.764 [2024-12-06 11:24:11.663521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.764 [2024-12-06 11:24:11.663528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.663532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.663534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.764 [2024-12-06 11:24:11.663539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.764 [2024-12-06 11:24:11.663548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.764 [2024-12-06 11:24:11.667064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.764 [2024-12-06 11:24:11.667078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.764 [2024-12-06 11:24:11.667081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.667084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.764 [2024-12-06 11:24:11.667093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.667097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.667099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x765550) 00:22:38.764 [2024-12-06 11:24:11.667105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.764 [2024-12-06 11:24:11.667115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c7580, cid 3, qid 0 00:22:38.764 [2024-12-06 11:24:11.667250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:38.764 [2024-12-06 11:24:11.667255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.764 [2024-12-06 11:24:11.667258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.764 [2024-12-06 11:24:11.667261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c7580) on tqpair=0x765550 00:22:38.764 [2024-12-06 11:24:11.667267] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:38.764 0% 00:22:38.764 Data Units Read: 0 00:22:38.764 Data Units Written: 0 00:22:38.764 Host Read Commands: 0 00:22:38.764 Host Write Commands: 0 00:22:38.764 Controller Busy Time: 0 minutes 00:22:38.764 Power Cycles: 0 00:22:38.764 Power On Hours: 0 hours 00:22:38.764 Unsafe Shutdowns: 0 00:22:38.764 Unrecoverable Media Errors: 0 00:22:38.764 Lifetime Error Log Entries: 0 00:22:38.764 Warning Temperature Time: 0 minutes 00:22:38.764 Critical Temperature Time: 0 minutes 00:22:38.764 00:22:38.764 Number of Queues 00:22:38.764 ================ 00:22:38.764 Number of I/O Submission Queues: 127 00:22:38.764 Number of I/O Completion Queues: 127 00:22:38.764 00:22:38.764 Active Namespaces 00:22:38.764 ================= 00:22:38.764 Namespace ID:1 00:22:38.764 Error Recovery Timeout: Unlimited 00:22:38.764 Command Set Identifier: NVM (00h) 00:22:38.764 Deallocate: Supported 00:22:38.764 Deallocated/Unwritten Error: Not Supported 00:22:38.764 Deallocated Read Value: Unknown 00:22:38.764 Deallocate in Write Zeroes: Not Supported 00:22:38.764 Deallocated Guard Field: 0xFFFF 00:22:38.764 Flush: Supported 00:22:38.764 Reservation: Supported 00:22:38.764 Namespace Sharing Capabilities: Multiple Controllers 00:22:38.764 Size (in LBAs): 131072 (0GiB) 00:22:38.764 Capacity (in LBAs): 131072 (0GiB) 00:22:38.764 Utilization (in LBAs): 131072 (0GiB) 00:22:38.764 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:38.764 EUI64: ABCDEF0123456789 00:22:38.764 UUID: 
d8704c91-905a-48e7-a855-095e00adc563 00:22:38.764 Thin Provisioning: Not Supported 00:22:38.764 Per-NS Atomic Units: Yes 00:22:38.764 Atomic Boundary Size (Normal): 0 00:22:38.764 Atomic Boundary Size (PFail): 0 00:22:38.764 Atomic Boundary Offset: 0 00:22:38.764 Maximum Single Source Range Length: 65535 00:22:38.764 Maximum Copy Length: 65535 00:22:38.764 Maximum Source Range Count: 1 00:22:38.764 NGUID/EUI64 Never Reused: No 00:22:38.764 Namespace Write Protected: No 00:22:38.764 Number of LBA Formats: 1 00:22:38.764 Current LBA Format: LBA Format #00 00:22:38.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:38.764 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.023 rmmod nvme_tcp 00:22:39.023 
rmmod nvme_fabrics 00:22:39.023 rmmod nvme_keyring 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1805825 ']' 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1805825 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1805825 ']' 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1805825 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1805825 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1805825' 00:22:39.023 killing process with pid 1805825 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1805825 00:22:39.023 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1805825 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.282 11:24:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.282 11:24:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.282 11:24:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.282 11:24:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.282 11:24:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.282 11:24:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.186 00:22:41.186 real 0m9.925s 00:22:41.186 user 0m7.823s 00:22:41.186 sys 0m4.865s 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.186 ************************************ 00:22:41.186 END TEST nvmf_identify 00:22:41.186 ************************************ 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.186 11:24:14 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.447 ************************************ 00:22:41.447 START TEST nvmf_perf 00:22:41.447 ************************************ 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:41.447 * Looking for test storage... 00:22:41.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.447 11:24:14 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.447 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.448 --rc genhtml_branch_coverage=1 
00:22:41.448 --rc genhtml_function_coverage=1 00:22:41.448 --rc genhtml_legend=1 00:22:41.448 --rc geninfo_all_blocks=1 00:22:41.448 --rc geninfo_unexecuted_blocks=1 00:22:41.448 00:22:41.448 ' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.448 --rc genhtml_branch_coverage=1 00:22:41.448 --rc genhtml_function_coverage=1 00:22:41.448 --rc genhtml_legend=1 00:22:41.448 --rc geninfo_all_blocks=1 00:22:41.448 --rc geninfo_unexecuted_blocks=1 00:22:41.448 00:22:41.448 ' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.448 --rc genhtml_branch_coverage=1 00:22:41.448 --rc genhtml_function_coverage=1 00:22:41.448 --rc genhtml_legend=1 00:22:41.448 --rc geninfo_all_blocks=1 00:22:41.448 --rc geninfo_unexecuted_blocks=1 00:22:41.448 00:22:41.448 ' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:41.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.448 --rc genhtml_branch_coverage=1 00:22:41.448 --rc genhtml_function_coverage=1 00:22:41.448 --rc genhtml_legend=1 00:22:41.448 --rc geninfo_all_blocks=1 00:22:41.448 --rc geninfo_unexecuted_blocks=1 00:22:41.448 00:22:41.448 ' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.448 11:24:14 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.448 11:24:14 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:41.448 11:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.016 11:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.016 11:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.016 11:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.016 11:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.016 11:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:48.016 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.016 
11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.016 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:48.016 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:48.017 Found net devices under 0000:af:00.0: cvl_0_0 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:48.017 Found net devices under 0000:af:00.1: cvl_0_1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:22:48.017 00:22:48.017 --- 10.0.0.2 ping statistics --- 00:22:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.017 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:48.017 00:22:48.017 --- 10.0.0.1 ping statistics --- 00:22:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.017 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1809828 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1809828 00:22:48.017 
11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1809828 ']' 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.017 11:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.017 [2024-12-06 11:24:20.385383] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:22:48.017 [2024-12-06 11:24:20.385431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.017 [2024-12-06 11:24:20.464854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.017 [2024-12-06 11:24:20.505617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.017 [2024-12-06 11:24:20.505649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.017 [2024-12-06 11:24:20.505656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.017 [2024-12-06 11:24:20.505662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.017 [2024-12-06 11:24:20.505667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.017 [2024-12-06 11:24:20.507221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.017 [2024-12-06 11:24:20.507250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.017 [2024-12-06 11:24:20.507275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.017 [2024-12-06 11:24:20.507276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.276 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.276 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:48.276 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.276 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.276 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.534 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.534 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:48.534 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:51.820 11:24:24 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:51.820 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.079 [2024-12-06 11:24:24.849229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.079 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.341 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:52.341 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.341 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:52.341 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:52.603 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.861 [2024-12-06 11:24:25.593995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.861 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:53.119 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:22:53.119 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:22:53.120 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:53.120 11:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:22:54.496 Initializing NVMe Controllers 00:22:54.496 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:22:54.496 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:22:54.496 Initialization complete. Launching workers. 00:22:54.496 ======================================================== 00:22:54.496 Latency(us) 00:22:54.496 Device Information : IOPS MiB/s Average min max 00:22:54.496 PCIE (0000:86:00.0) NSID 1 from core 0: 105327.00 411.43 303.31 19.03 4723.86 00:22:54.496 ======================================================== 00:22:54.496 Total : 105327.00 411.43 303.31 19.03 4723.86 00:22:54.496 00:22:54.496 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:55.432 Initializing NVMe Controllers 00:22:55.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.432 Initialization complete. Launching workers. 
00:22:55.433 ======================================================== 00:22:55.433 Latency(us) 00:22:55.433 Device Information : IOPS MiB/s Average min max 00:22:55.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 296.00 1.16 3498.96 116.98 45682.22 00:22:55.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19682.44 5165.83 47893.92 00:22:55.433 ======================================================== 00:22:55.433 Total : 347.00 1.36 5877.51 116.98 47893.92 00:22:55.433 00:22:55.433 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:56.811 Initializing NVMe Controllers 00:22:56.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:56.811 Initialization complete. Launching workers. 
00:22:56.811 ======================================================== 00:22:56.811 Latency(us) 00:22:56.811 Device Information : IOPS MiB/s Average min max 00:22:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12317.99 48.12 2606.51 471.46 6201.01 00:22:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3944.00 15.41 8148.20 4268.91 15691.39 00:22:56.812 ======================================================== 00:22:56.812 Total : 16261.98 63.52 3950.53 471.46 15691.39 00:22:56.812 00:22:56.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:56.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:56.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.349 Initializing NVMe Controllers 00:22:59.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.349 Controller IO queue size 128, less than required. 00:22:59.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.349 Controller IO queue size 128, less than required. 00:22:59.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:59.349 Initialization complete. Launching workers. 
00:22:59.349 ======================================================== 00:22:59.349 Latency(us) 00:22:59.349 Device Information : IOPS MiB/s Average min max 00:22:59.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1927.49 481.87 67287.89 49065.03 110861.82 00:22:59.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.54 146.64 228924.43 92803.64 349892.18 00:22:59.349 ======================================================== 00:22:59.349 Total : 2514.03 628.51 104998.80 49065.03 349892.18 00:22:59.349 00:22:59.349 11:24:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:59.349 No valid NVMe controllers or AIO or URING devices found 00:22:59.349 Initializing NVMe Controllers 00:22:59.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.349 Controller IO queue size 128, less than required. 00:22:59.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.349 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:59.349 Controller IO queue size 128, less than required. 00:22:59.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.350 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:59.350 WARNING: Some requested NVMe devices were skipped 00:22:59.350 11:24:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:01.884 Initializing NVMe Controllers 00:23:01.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.884 Controller IO queue size 128, less than required. 00:23:01.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.884 Controller IO queue size 128, less than required. 00:23:01.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.884 Initialization complete. Launching workers. 
00:23:01.884 00:23:01.884 ==================== 00:23:01.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:01.884 TCP transport: 00:23:01.884 polls: 17777 00:23:01.884 idle_polls: 13957 00:23:01.884 sock_completions: 3820 00:23:01.884 nvme_completions: 6527 00:23:01.884 submitted_requests: 9834 00:23:01.884 queued_requests: 1 00:23:01.884 00:23:01.884 ==================== 00:23:01.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:01.884 TCP transport: 00:23:01.884 polls: 17266 00:23:01.884 idle_polls: 12903 00:23:01.884 sock_completions: 4363 00:23:01.884 nvme_completions: 7087 00:23:01.884 submitted_requests: 10546 00:23:01.884 queued_requests: 1 00:23:01.884 ======================================================== 00:23:01.884 Latency(us) 00:23:01.884 Device Information : IOPS MiB/s Average min max 00:23:01.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1631.40 407.85 80654.91 39992.49 152623.32 00:23:01.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1771.39 442.85 72698.61 46002.55 108834.00 00:23:01.884 ======================================================== 00:23:01.884 Total : 3402.79 850.70 76513.10 39992.49 152623.32 00:23:01.884 00:23:01.884 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:01.884 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.143 11:24:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.143 rmmod nvme_tcp 00:23:02.143 rmmod nvme_fabrics 00:23:02.143 rmmod nvme_keyring 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1809828 ']' 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1809828 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1809828 ']' 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1809828 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.143 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809828 00:23:02.143 11:24:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.143 11:24:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.143 11:24:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809828' 00:23:02.143 killing process with pid 1809828 00:23:02.143 11:24:35 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 1809828 00:23:02.143 11:24:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1809828 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.046 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.953 00:23:05.953 real 0m24.488s 00:23:05.953 user 1m3.861s 00:23:05.953 sys 0m8.305s 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:05.953 ************************************ 00:23:05.953 END TEST nvmf_perf 00:23:05.953 ************************************ 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.953 ************************************ 00:23:05.953 START TEST nvmf_fio_host 00:23:05.953 ************************************ 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:05.953 * Looking for test storage... 00:23:05.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.953 11:24:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.953 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.213 11:24:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.213 --rc genhtml_branch_coverage=1 00:23:06.213 --rc genhtml_function_coverage=1 00:23:06.213 --rc genhtml_legend=1 00:23:06.213 --rc geninfo_all_blocks=1 00:23:06.213 --rc geninfo_unexecuted_blocks=1 00:23:06.213 00:23:06.213 ' 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.213 --rc genhtml_branch_coverage=1 00:23:06.213 --rc genhtml_function_coverage=1 00:23:06.213 --rc genhtml_legend=1 00:23:06.213 --rc geninfo_all_blocks=1 00:23:06.213 --rc geninfo_unexecuted_blocks=1 00:23:06.213 00:23:06.213 ' 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.213 --rc genhtml_branch_coverage=1 00:23:06.213 --rc genhtml_function_coverage=1 00:23:06.213 --rc genhtml_legend=1 00:23:06.213 --rc geninfo_all_blocks=1 00:23:06.213 --rc geninfo_unexecuted_blocks=1 00:23:06.213 00:23:06.213 ' 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.213 --rc genhtml_branch_coverage=1 00:23:06.213 --rc genhtml_function_coverage=1 00:23:06.213 --rc genhtml_legend=1 00:23:06.213 --rc geninfo_all_blocks=1 00:23:06.213 --rc geninfo_unexecuted_blocks=1 00:23:06.213 00:23:06.213 ' 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.213 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.214 11:24:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.214 11:24:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:23:12.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:12.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.784 11:24:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:12.784 Found net devices under 0000:af:00.0: cvl_0_0 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.784 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:12.784 Found net devices under 0000:af:00.1: cvl_0_1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.785 11:24:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:23:12.785 00:23:12.785 --- 10.0.0.2 ping statistics --- 00:23:12.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.785 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:12.785 00:23:12.785 --- 10.0.0.1 ping statistics --- 00:23:12.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.785 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1816236 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1816236 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1816236 ']' 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.785 11:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.785 [2024-12-06 11:24:44.960979] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:23:12.785 [2024-12-06 11:24:44.961027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.785 [2024-12-06 11:24:45.038559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.785 [2024-12-06 11:24:45.079296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.785 [2024-12-06 11:24:45.079329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:12.785 [2024-12-06 11:24:45.079336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.785 [2024-12-06 11:24:45.079341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.785 [2024-12-06 11:24:45.079346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.785 [2024-12-06 11:24:45.080833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.785 [2024-12-06 11:24:45.080950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.785 [2024-12-06 11:24:45.081065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.785 [2024-12-06 11:24:45.081075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:12.785 [2024-12-06 11:24:45.338315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:12.785 Malloc1 00:23:12.785 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.043 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:13.301 11:24:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.301 [2024-12-06 11:24:46.142223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.301 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:13.558 11:24:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:13.558 11:24:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:13.816 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:13.816 fio-3.35 00:23:13.816 Starting 1 thread 00:23:16.347 00:23:16.347 test: (groupid=0, jobs=1): err= 0: pid=1816800: Fri Dec 6 11:24:49 2024 00:23:16.347 read: IOPS=12.9k, BW=50.4MiB/s (52.8MB/s)(101MiB/2005msec) 00:23:16.347 slat (nsec): min=1413, max=175488, avg=1551.04, stdev=1488.06 00:23:16.347 clat (usec): min=2080, max=9571, avg=5462.81, stdev=400.54 00:23:16.347 lat (usec): min=2106, max=9572, avg=5464.36, stdev=400.42 00:23:16.347 clat percentiles (usec): 00:23:16.347 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:23:16.348 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:23:16.348 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:23:16.348 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 8356], 00:23:16.348 | 99.99th=[ 9503] 00:23:16.348 bw ( KiB/s): min=50232, max=52224, per=100.00%, avg=51608.00, stdev=936.72, samples=4 00:23:16.348 iops : min=12558, max=13056, avg=12902.00, stdev=234.18, samples=4 00:23:16.348 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(101MiB/2005msec); 0 zone resets 00:23:16.348 slat (nsec): min=1448, max=158354, avg=1613.93, stdev=1118.26 00:23:16.348 clat (usec): min=1651, max=9192, avg=4401.25, stdev=339.04 00:23:16.348 lat (usec): min=1662, max=9193, avg=4402.86, stdev=338.99 00:23:16.348 clat percentiles (usec): 00:23:16.348 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4146], 00:23:16.348 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 
00:23:16.348 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:23:16.348 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 7177], 99.95th=[ 7963], 00:23:16.348 | 99.99th=[ 9110] 00:23:16.348 bw ( KiB/s): min=50752, max=51800, per=100.00%, avg=51526.00, stdev=516.12, samples=4 00:23:16.348 iops : min=12688, max=12950, avg=12881.50, stdev=129.03, samples=4 00:23:16.348 lat (msec) : 2=0.02%, 4=4.77%, 10=95.21% 00:23:16.348 cpu : usr=69.96%, sys=28.94%, ctx=107, majf=0, minf=2 00:23:16.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:16.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:16.348 issued rwts: total=25868,25826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:16.348 00:23:16.348 Run status group 0 (all jobs): 00:23:16.348 READ: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=101MiB (106MB), run=2005-2005msec 00:23:16.348 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=101MiB (106MB), run=2005-2005msec 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:16.348 11:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:16.606 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:16.606 fio-3.35 00:23:16.606 Starting 1 thread 00:23:19.143 00:23:19.143 test: (groupid=0, jobs=1): err= 0: pid=1817324: Fri Dec 6 11:24:51 2024 00:23:19.143 read: IOPS=12.0k, BW=187MiB/s (196MB/s)(375MiB/2003msec) 00:23:19.143 slat (nsec): min=2260, max=77434, avg=2606.98, stdev=1357.24 00:23:19.143 clat (usec): min=224, max=12466, avg=6228.17, stdev=1511.42 00:23:19.143 lat (usec): min=230, max=12473, avg=6230.77, stdev=1511.51 00:23:19.143 clat percentiles (usec): 00:23:19.143 | 1.00th=[ 3228], 5.00th=[ 3884], 10.00th=[ 4293], 20.00th=[ 4883], 00:23:19.143 | 30.00th=[ 5342], 40.00th=[ 5800], 50.00th=[ 6259], 60.00th=[ 6652], 00:23:19.143 | 70.00th=[ 6915], 80.00th=[ 7373], 90.00th=[ 8160], 95.00th=[ 8717], 00:23:19.143 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11600], 99.95th=[11600], 00:23:19.143 | 99.99th=[12256] 00:23:19.143 bw ( KiB/s): min=90986, max=95872, per=49.11%, avg=94154.50, stdev=2163.48, samples=4 00:23:19.143 iops : min= 5686, max= 5992, avg=5884.50, stdev=135.52, samples=4 00:23:19.143 write: IOPS=6825, BW=107MiB/s (112MB/s)(191MiB/1791msec); 0 zone resets 00:23:19.143 slat (usec): min=26, max=293, avg=29.24, stdev= 6.29 00:23:19.143 clat (usec): min=3141, max=15194, avg=7856.85, stdev=1409.29 00:23:19.143 lat (usec): min=3169, max=15221, avg=7886.09, stdev=1410.19 00:23:19.143 clat percentiles (usec): 00:23:19.143 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6259], 
20.00th=[ 6652], 00:23:19.143 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8029], 00:23:19.143 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10552], 00:23:19.143 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12518], 99.95th=[12911], 00:23:19.143 | 99.99th=[15139] 00:23:19.143 bw ( KiB/s): min=93988, max=99712, per=89.51%, avg=97753.00, stdev=2558.89, samples=4 00:23:19.143 iops : min= 5874, max= 6232, avg=6109.50, stdev=160.05, samples=4 00:23:19.143 lat (usec) : 250=0.01% 00:23:19.143 lat (msec) : 2=0.03%, 4=4.21%, 10=91.78%, 20=3.98% 00:23:19.143 cpu : usr=84.67%, sys=13.29%, ctx=180, majf=0, minf=2 00:23:19.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:19.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.143 issued rwts: total=24001,12225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.143 00:23:19.143 Run status group 0 (all jobs): 00:23:19.143 READ: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=375MiB (393MB), run=2003-2003msec 00:23:19.143 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=191MiB (200MB), run=1791-1791msec 00:23:19.143 11:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.143 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.143 rmmod nvme_tcp 00:23:19.403 rmmod nvme_fabrics 00:23:19.403 rmmod nvme_keyring 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1816236 ']' 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1816236 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1816236 ']' 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1816236 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816236 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1816236' 00:23:19.403 killing process with pid 1816236 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1816236 00:23:19.403 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1816236 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.663 11:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.569 00:23:21.569 real 0m15.720s 00:23:21.569 user 0m51.378s 00:23:21.569 sys 0m6.680s 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 
************************************ 00:23:21.569 END TEST nvmf_fio_host 00:23:21.569 ************************************ 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.569 11:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.829 ************************************ 00:23:21.829 START TEST nvmf_failover 00:23:21.829 ************************************ 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:21.829 * Looking for test storage... 00:23:21.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.829 11:24:54 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.829 --rc genhtml_branch_coverage=1 00:23:21.829 --rc genhtml_function_coverage=1 00:23:21.829 --rc genhtml_legend=1 00:23:21.829 --rc geninfo_all_blocks=1 00:23:21.829 --rc geninfo_unexecuted_blocks=1 00:23:21.829 00:23:21.829 ' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.829 --rc genhtml_branch_coverage=1 00:23:21.829 --rc genhtml_function_coverage=1 00:23:21.829 --rc genhtml_legend=1 00:23:21.829 --rc geninfo_all_blocks=1 00:23:21.829 --rc geninfo_unexecuted_blocks=1 00:23:21.829 00:23:21.829 ' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.829 --rc genhtml_branch_coverage=1 00:23:21.829 --rc genhtml_function_coverage=1 00:23:21.829 --rc genhtml_legend=1 00:23:21.829 --rc geninfo_all_blocks=1 00:23:21.829 --rc geninfo_unexecuted_blocks=1 00:23:21.829 00:23:21.829 ' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.829 --rc genhtml_branch_coverage=1 00:23:21.829 --rc genhtml_function_coverage=1 00:23:21.829 --rc 
genhtml_legend=1 00:23:21.829 --rc geninfo_all_blocks=1 00:23:21.829 --rc geninfo_unexecuted_blocks=1 00:23:21.829 00:23:21.829 ' 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:21.829 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.830 11:24:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.830 11:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.477 11:25:00 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.477 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:28.478 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:28.478 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:28.478 Found net devices under 0000:af:00.0: cvl_0_0 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:28.478 Found net devices under 0000:af:00.1: cvl_0_1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:23:28.478 00:23:28.478 --- 10.0.0.2 ping statistics --- 00:23:28.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.478 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:23:28.478 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:23:28.478 00:23:28.478 --- 10.0.0.1 ping statistics --- 00:23:28.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.479 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1821485 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1821485 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1821485 ']' 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:28.479 [2024-12-06 11:25:00.770028] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:23:28.479 [2024-12-06 11:25:00.770078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.479 [2024-12-06 11:25:00.826519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:28.479 [2024-12-06 11:25:00.866159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.479 [2024-12-06 11:25:00.866192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.479 [2024-12-06 11:25:00.866199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.479 [2024-12-06 11:25:00.866205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:28.479 [2024-12-06 11:25:00.866211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.479 [2024-12-06 11:25:00.867703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.479 [2024-12-06 11:25:00.867726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.479 [2024-12-06 11:25:00.867727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.479 11:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:28.479 [2024-12-06 11:25:01.152195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.479 11:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:28.479 Malloc0 00:23:28.479 11:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.738 11:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.997 11:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.997 [2024-12-06 11:25:01.899655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.997 11:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:29.256 [2024-12-06 11:25:02.084127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.256 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:29.516 [2024-12-06 11:25:02.276722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1821816 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1821816 /var/tmp/bdevperf.sock 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1821816 ']' 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.516 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:29.775 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.775 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:29.775 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:30.034 NVMe0n1 00:23:30.034 11:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:30.293 00:23:30.293 11:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.293 11:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1821862 00:23:30.293 11:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:31.672 11:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.672 [2024-12-06 11:25:04.382979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272be0 is same with the state(6) to be set 00:23:31.672 [identical 'recv state of tqpair=0x2272be0' message repeated for timestamps 11:25:04.383024 through 11:25:04.383379] 00:23:31.673 11:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:34.959 11:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:34.959 00:23:34.959 11:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:35.216 11:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:38.539 11:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-12-06 11:25:11.232078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening
on 10.0.0.2 port 4420 *** 00:23:38.539 11:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:39.472 11:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:39.731 [2024-12-06 11:25:12.435572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bfab0 is same with the state(6) to be set 00:23:39.731 [identical 'recv state of tqpair=0x23bfab0' message repeated for timestamps 11:25:12.435608 through 11:25:12.435764] 00:23:39.731 11:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1821862 00:23:46.306 { 00:23:46.306 "results": [ 00:23:46.306 { 00:23:46.306 "job": "NVMe0n1", 00:23:46.306 "core_mask": "0x1", 00:23:46.306 "workload": "verify", 00:23:46.306 "status": "finished", 00:23:46.306 "verify_range": { 00:23:46.306 "start": 0, 00:23:46.306 "length": 16384 00:23:46.306 }, 00:23:46.306 "queue_depth": 128, 00:23:46.306 "io_size": 4096, 00:23:46.306 "runtime": 15.009601, 00:23:46.306 "iops": 12051.619493416247, 00:23:46.306 "mibps": 47.076638646157214, 00:23:46.306 "io_failed": 16621, 00:23:46.306 "io_timeout": 0, 00:23:46.306 "avg_latency_us": 9707.427418587964, 00:23:46.306 "min_latency_us":
394.70545454545453, 00:23:46.306 "max_latency_us": 28001.745454545453 00:23:46.306 } 00:23:46.306 ], 00:23:46.306 "core_count": 1 00:23:46.306 } 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1821816 ']' 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821816' 00:23:46.306 killing process with pid 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1821816 00:23:46.306 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:46.306 [2024-12-06 11:25:02.350192] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:23:46.306 [2024-12-06 11:25:02.350245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821816 ] 00:23:46.306 [2024-12-06 11:25:02.421898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.306 [2024-12-06 11:25:02.460571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.306 Running I/O for 15 seconds... 00:23:46.306 12254.00 IOPS, 47.87 MiB/s [2024-12-06T10:25:19.244Z] [2024-12-06 11:25:04.384786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 
11:25:04.384956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.384988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.384995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.306 [2024-12-06 11:25:04.385088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.306 [2024-12-06 11:25:04.385096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 
11:25:04.385202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 
11:25:04.385437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385514] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.307 [2024-12-06 11:25:04.385559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.307 [2024-12-06 11:25:04.385632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.307 [2024-12-06 11:25:04.385638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 
11:25:04.385664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.308 [2024-12-06 11:25:04.385758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 
11:25:04.385891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.308 [2024-12-06 11:25:04.385956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.308 [2024-12-06 11:25:04.385963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:46.308 [2024-12-06 11:25:04.385969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for lba:107480 through lba:107816, len:8 each ...]
00:23:46.309 [2024-12-06 11:25:04.386566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:46.309 [2024-12-06 11:25:04.386572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:46.309 [2024-12-06 11:25:04.386577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107824 len:8 PRP1 0x0 PRP2 0x0
00:23:46.309 [2024-12-06 11:25:04.386584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.309 [2024-12-06 11:25:04.386626] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:46.309 [2024-12-06 11:25:04.386647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.309 [2024-12-06 11:25:04.386654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.309 [2024-12-06 11:25:04.386661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.309 [2024-12-06 11:25:04.386667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.309 [2024-12-06 11:25:04.386673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.309 [2024-12-06 11:25:04.386679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.309 [2024-12-06 11:25:04.386686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.309 [2024-12-06 11:25:04.386691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.309 [2024-12-06 11:25:04.386698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:46.309 [2024-12-06 11:25:04.389267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:46.309 [2024-12-06 11:25:04.389295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72480 (9): Bad file descriptor
00:23:46.309 [2024-12-06 11:25:04.531148] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:46.309 11479.50 IOPS, 44.84 MiB/s [2024-12-06T10:25:19.247Z] 11831.00 IOPS, 46.21 MiB/s [2024-12-06T10:25:19.247Z] 11979.25 IOPS, 46.79 MiB/s [2024-12-06T10:25:19.247Z]
00:23:46.309 [2024-12-06 11:25:08.023294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:46.309 [2024-12-06 11:25:08.023335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for lba:121528 through lba:122040, len:8 each ...]
00:23:46.311 [2024-12-06 11:25:08.024274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.311 [2024-12-06 11:25:08.024384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.311 [2024-12-06 11:25:08.024391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122120 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.312 [2024-12-06 11:25:08.024471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024480] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.312 [2024-12-06 11:25:08.024486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.312 [2024-12-06 11:25:08.024499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.312 [2024-12-06 11:25:08.024511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72480 is same with the state(6) to be set 00:23:46.312 [2024-12-06 11:25:08.024637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122128 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:46.312 [2024-12-06 11:25:08.024673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122136 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122144 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122152 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122160 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122168 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122176 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122184 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024825] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122192 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122200 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122208 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122216 len:8 PRP1 0x0 PRP2 
0x0 00:23:46.312 [2024-12-06 11:25:08.024899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121208 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121216 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121224 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024972] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.024982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121232 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.024987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.024994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.024999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.025004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121240 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.025010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.025016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.025021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.025026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121248 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.025031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.025038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.025042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.025047] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121256 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.025053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.312 [2024-12-06 11:25:08.025066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.312 [2024-12-06 11:25:08.025071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.312 [2024-12-06 11:25:08.025076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121264 len:8 PRP1 0x0 PRP2 0x0 00:23:46.312 [2024-12-06 11:25:08.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121272 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121280 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121288 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121296 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121304 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025210] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121312 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121320 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121328 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121336 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 
[2024-12-06 11:25:08.025285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121344 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121352 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121360 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121368 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121376 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121384 len:8 PRP1 0x0 PRP2 0x0 00:23:46.313 [2024-12-06 11:25:08.025415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.313 [2024-12-06 11:25:08.025422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.313 [2024-12-06 11:25:08.025426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.313 [2024-12-06 11:25:08.025431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122224 len:8 PRP1 0x0 PRP2 0x0
00:23:46.313 [2024-12-06 11:25:08.025437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.313 [2024-12-06 11:25:08.035383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:46.313 [2024-12-06 11:25:08.035395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:46.313 [2024-12-06 11:25:08.035404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121392 len:8 PRP1 0x0 PRP2 0x0
00:23:46.313 [2024-12-06 11:25:08.035413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort/complete sequence repeated for each remaining queued request — READ lba:121400 through lba:121512 and WRITE lba:121520 through lba:121944, each len:8, all aborted with SQ DELETION (00/08) qid:1 cid:0 ...]
00:23:46.316 [2024-12-06 11:25:08.044601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121952 len:8 PRP1 0x0 PRP2 0x0
00:23:46.316 [2024-12-06 11:25:08.044611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.316 [2024-12-06 11:25:08.044623] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.316 [2024-12-06 11:25:08.044631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.316 [2024-12-06 11:25:08.044640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121960 len:8 PRP1 0x0 PRP2 0x0 00:23:46.316 [2024-12-06 11:25:08.044650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.316 [2024-12-06 11:25:08.044662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.316 [2024-12-06 11:25:08.044670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.316 [2024-12-06 11:25:08.044679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121968 len:8 PRP1 0x0 PRP2 0x0 00:23:46.316 [2024-12-06 11:25:08.044689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.316 [2024-12-06 11:25:08.044700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.316 [2024-12-06 11:25:08.044708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.316 [2024-12-06 11:25:08.044718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121976 len:8 PRP1 0x0 PRP2 0x0 00:23:46.316 [2024-12-06 11:25:08.044729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.316 [2024-12-06 11:25:08.044740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044757] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121984 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121992 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122000 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122008 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122016 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122024 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.044964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.044975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.044983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.044992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122032 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045022] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122040 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122048 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122056 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122064 len:8 PRP1 0x0 PRP2 0x0 
00:23:46.317 [2024-12-06 11:25:08.045167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122072 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122080 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122088 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045298] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122096 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122104 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122112 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.317 [2024-12-06 11:25:08.045424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.317 [2024-12-06 11:25:08.045433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122120 len:8 PRP1 0x0 PRP2 0x0 00:23:46.317 [2024-12-06 11:25:08.045443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.317 [2024-12-06 11:25:08.045497] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:46.317 [2024-12-06 11:25:08.045511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:46.317 [2024-12-06 11:25:08.045555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72480 (9): Bad file descriptor 00:23:46.317 [2024-12-06 11:25:08.050251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:46.317 [2024-12-06 11:25:08.161514] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:46.317 11696.80 IOPS, 45.69 MiB/s [2024-12-06T10:25:19.255Z] 11797.83 IOPS, 46.09 MiB/s [2024-12-06T10:25:19.255Z] 11863.29 IOPS, 46.34 MiB/s [2024-12-06T10:25:19.255Z] 11917.12 IOPS, 46.55 MiB/s [2024-12-06T10:25:19.255Z] 11948.44 IOPS, 46.67 MiB/s [2024-12-06T10:25:19.255Z]
00:23:46.317 [2024-12-06 11:25:12.436508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.317 [2024-12-06 11:25:12.436539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort sequence repeats for interleaved queued READ commands (lba:60744 through lba:60976, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba:61048 through lba:61240, SGL DATA BLOCK OFFSET), each len:8 and each completed ABORTED - SQ DELETION (00/08); the sequence continues past the end of this excerpt ...]
[2024-12-06 11:25:12.437316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-12-06 11:25:12.437415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 
11:25:12.437539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437611] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.319 [2024-12-06 11:25:12.437645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.319 [2024-12-06 11:25:12.437651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.320 [2024-12-06 11:25:12.437820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61480 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61488 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61496 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61504 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61512 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437956] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.437982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.437987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.437993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.437998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61536 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61544 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 
[2024-12-06 11:25:12.438028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61552 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61584 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.320 [2024-12-06 11:25:12.438191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.320 [2024-12-06 11:25:12.438196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 PRP1 0x0 PRP2 0x0 00:23:46.320 [2024-12-06 11:25:12.438202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.320 [2024-12-06 11:25:12.438208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61616 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61624 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61632 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61640 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61648 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61656 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61664 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.438370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61672 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.438380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.438386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.438391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61680 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61688 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61696 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61704 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 
[2024-12-06 11:25:12.448619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61712 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61720 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61728 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:61736 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61744 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.321 [2024-12-06 11:25:12.448736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.321 [2024-12-06 11:25:12.448742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61752 len:8 PRP1 0x0 PRP2 0x0 00:23:46.321 [2024-12-06 11:25:12.448748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448792] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:46.321 [2024-12-06 11:25:12.448816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.321 [2024-12-06 11:25:12.448824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.321 [2024-12-06 11:25:12.448839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.321 [2024-12-06 11:25:12.448853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.321 [2024-12-06 11:25:12.448868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.321 [2024-12-06 11:25:12.448875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:46.321 [2024-12-06 11:25:12.448906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72480 (9): Bad file descriptor 00:23:46.321 [2024-12-06 11:25:12.451598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:46.321 [2024-12-06 11:25:12.515629] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:23:46.321 11887.20 IOPS, 46.43 MiB/s [2024-12-06T10:25:19.259Z] 11944.64 IOPS, 46.66 MiB/s [2024-12-06T10:25:19.259Z] 11971.25 IOPS, 46.76 MiB/s [2024-12-06T10:25:19.259Z] 12000.62 IOPS, 46.88 MiB/s [2024-12-06T10:25:19.259Z] 12039.36 IOPS, 47.03 MiB/s [2024-12-06T10:25:19.259Z] 12050.87 IOPS, 47.07 MiB/s 00:23:46.321 Latency(us) 00:23:46.321 [2024-12-06T10:25:19.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.321 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:46.321 Verification LBA range: start 0x0 length 0x4000 00:23:46.321 NVMe0n1 : 15.01 12051.62 47.08 1107.36 0.00 9707.43 394.71 28001.75 00:23:46.321 [2024-12-06T10:25:19.259Z] =================================================================================================================== 00:23:46.321 [2024-12-06T10:25:19.260Z] Total : 12051.62 47.08 1107.36 0.00 9707.43 394.71 28001.75 00:23:46.322 Received shutdown signal, test time was about 15.000000 seconds 00:23:46.322 00:23:46.322 Latency(us) 00:23:46.322 [2024-12-06T10:25:19.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.322 [2024-12-06T10:25:19.260Z] =================================================================================================================== 00:23:46.322 [2024-12-06T10:25:19.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1824576 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1824576 /var/tmp/bdevperf.sock 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1824576 ']' 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:46.322 11:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.322 [2024-12-06 11:25:19.001989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.322 11:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:46.322 [2024-12-06 11:25:19.182490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:46.322 
11:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:46.890 NVMe0n1 00:23:46.890 11:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:47.149 00:23:47.150 11:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:47.408 00:23:47.409 11:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:47.409 11:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:47.667 11:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:47.667 11:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:50.956 11:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.957 11:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:50.957 11:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.957 11:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1825537 00:23:50.957 11:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1825537 00:23:52.339 { 00:23:52.339 "results": [ 00:23:52.339 { 00:23:52.339 "job": "NVMe0n1", 00:23:52.339 "core_mask": "0x1", 00:23:52.339 "workload": "verify", 00:23:52.339 "status": "finished", 00:23:52.339 "verify_range": { 00:23:52.339 "start": 0, 00:23:52.339 "length": 16384 00:23:52.339 }, 00:23:52.339 "queue_depth": 128, 00:23:52.339 "io_size": 4096, 00:23:52.339 "runtime": 1.006413, 00:23:52.339 "iops": 12365.698773763852, 00:23:52.339 "mibps": 48.30351083501505, 00:23:52.339 "io_failed": 0, 00:23:52.339 "io_timeout": 0, 00:23:52.339 "avg_latency_us": 10303.090876949487, 00:23:52.339 "min_latency_us": 1995.8690909090908, 00:23:52.339 "max_latency_us": 8936.727272727272 00:23:52.339 } 00:23:52.339 ], 00:23:52.339 "core_count": 1 00:23:52.339 } 00:23:52.339 11:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.339 [2024-12-06 11:25:18.633452] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:23:52.339 [2024-12-06 11:25:18.633501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824576 ] 00:23:52.339 [2024-12-06 11:25:18.707570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.339 [2024-12-06 11:25:18.741705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.339 [2024-12-06 11:25:20.556358] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:52.339 [2024-12-06 11:25:20.556406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.340 [2024-12-06 11:25:20.556417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.340 [2024-12-06 11:25:20.556425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.340 [2024-12-06 11:25:20.556432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.340 [2024-12-06 11:25:20.556439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.340 [2024-12-06 11:25:20.556445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.340 [2024-12-06 11:25:20.556452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.340 [2024-12-06 11:25:20.556458] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.340 [2024-12-06 11:25:20.556465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:52.340 [2024-12-06 11:25:20.556490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:52.340 [2024-12-06 11:25:20.556505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141a480 (9): Bad file descriptor 00:23:52.340 [2024-12-06 11:25:20.688198] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:52.340 Running I/O for 1 seconds... 00:23:52.340 12288.00 IOPS, 48.00 MiB/s 00:23:52.340 Latency(us) 00:23:52.340 [2024-12-06T10:25:25.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.340 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:52.340 Verification LBA range: start 0x0 length 0x4000 00:23:52.340 NVMe0n1 : 1.01 12365.70 48.30 0.00 0.00 10303.09 1995.87 8936.73 00:23:52.340 [2024-12-06T10:25:25.278Z] =================================================================================================================== 00:23:52.340 [2024-12-06T10:25:25.278Z] Total : 12365.70 48.30 0.00 0.00 10303.09 1995.87 8936.73 00:23:52.340 11:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.340 11:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:52.340 11:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.340 11:25:25 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.340 11:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:52.598 11:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.855 11:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1824576 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1824576 ']' 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1824576 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1824576 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1824576' 00:23:56.138 killing 
process with pid 1824576 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1824576 00:23:56.138 11:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1824576 00:23:56.138 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:56.138 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.397 rmmod nvme_tcp 00:23:56.397 rmmod nvme_fabrics 00:23:56.397 rmmod nvme_keyring 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1821485 ']' 00:23:56.397 11:25:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1821485 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1821485 ']' 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1821485 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.397 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821485 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821485' 00:23:56.657 killing process with pid 1821485 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1821485 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1821485 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.657 11:25:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.657 11:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.195 00:23:59.195 real 0m37.104s 00:23:59.195 user 1m56.561s 00:23:59.195 sys 0m8.113s 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.195 ************************************ 00:23:59.195 END TEST nvmf_failover 00:23:59.195 ************************************ 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.195 ************************************ 00:23:59.195 START TEST nvmf_host_discovery 00:23:59.195 ************************************ 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:59.195 * Looking for test storage... 
00:23:59.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.195 --rc genhtml_branch_coverage=1 00:23:59.195 --rc genhtml_function_coverage=1 00:23:59.195 --rc 
genhtml_legend=1 00:23:59.195 --rc geninfo_all_blocks=1 00:23:59.195 --rc geninfo_unexecuted_blocks=1 00:23:59.195 00:23:59.195 ' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.195 --rc genhtml_branch_coverage=1 00:23:59.195 --rc genhtml_function_coverage=1 00:23:59.195 --rc genhtml_legend=1 00:23:59.195 --rc geninfo_all_blocks=1 00:23:59.195 --rc geninfo_unexecuted_blocks=1 00:23:59.195 00:23:59.195 ' 00:23:59.195 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.195 --rc genhtml_branch_coverage=1 00:23:59.195 --rc genhtml_function_coverage=1 00:23:59.196 --rc genhtml_legend=1 00:23:59.196 --rc geninfo_all_blocks=1 00:23:59.196 --rc geninfo_unexecuted_blocks=1 00:23:59.196 00:23:59.196 ' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:59.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.196 --rc genhtml_branch_coverage=1 00:23:59.196 --rc genhtml_function_coverage=1 00:23:59.196 --rc genhtml_legend=1 00:23:59.196 --rc geninfo_all_blocks=1 00:23:59.196 --rc geninfo_unexecuted_blocks=1 00:23:59.196 00:23:59.196 ' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.196 11:25:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.196 11:25:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.196 11:25:31 
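An aside on the trace above: paths/export.sh prepends the same toolchain directories (golangci, protoc, go) on every sourcing, so PATH accumulates many duplicate entries by the time it is exported. This is harmless but noisy. A hypothetical order-preserving dedup helper (not part of SPDK) would reduce such a PATH to one copy of each entry:

```python
def dedup_path(path: str) -> str:
    """Keep the first occurrence of each PATH entry, preserving order."""
    seen = set()
    kept = []
    for entry in path.split(":"):
        if entry and entry not in seen:
            seen.add(entry)
            kept.append(entry)
    return ":".join(kept)
```

Applied to the exported PATH above, this would collapse the repeated /opt/go/1.21.1/bin, /opt/protoc/21.7/bin, and /opt/golangci/1.54.2/bin runs into single entries while leaving the system directories in their original order.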
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.196 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.766 
11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.766 11:25:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:05.766 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:05.766 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:05.766 Found net devices under 0000:af:00.0: cvl_0_0 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:05.766 Found net devices under 0000:af:00.1: cvl_0_1 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.766 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:24:05.767 00:24:05.767 --- 10.0.0.2 ping statistics --- 00:24:05.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.767 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:05.767 00:24:05.767 --- 10.0.0.1 ping statistics --- 00:24:05.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.767 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.767 
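At this point the harness has verified the target/initiator link with one ping in each direction (10.0.0.2 from the host, 10.0.0.1 from inside the cvl_0_0_ns_spdk namespace). A script post-processing this log could recover the round-trip figures from the `rtt min/avg/max/mdev` summary lines with a small parser; `parse_ping_rtt` below is a hypothetical helper, not something SPDK provides:

```python
import re

def parse_ping_rtt(line: str):
    """Extract (min, avg, max, mdev) in ms from a ping summary line,
    e.g. 'rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms'."""
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line)
    if not m:
        return None
    return tuple(float(x) for x in m.groups())
```

Against the two ping summaries above this yields 0.424 ms for the host-to-target direction and 0.230 ms for the namespace-to-initiator direction.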
11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1830070 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1830070 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1830070 ']' 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
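The `waitforlisten` step above blocks (up to `max_retries=100`) until the freshly started nvmf_tgt opens its RPC socket at /var/tmp/spdk.sock. The core of that pattern is a poll-with-timeout on the socket path; the sketch below is a simplified stand-in (the real waitforlisten in autotest_common.sh also checks that the pid is still alive and that the RPC endpoint answers):

```python
import os
import time

def wait_for_socket(path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until a UNIX domain socket path appears, or give up."""
    for _ in range(max_retries):
        if os.path.exists(path):
            return True
        time.sleep(delay)
    return False
```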
00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.767 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 [2024-12-06 11:25:37.941168] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:24:05.767 [2024-12-06 11:25:37.941208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.767 [2024-12-06 11:25:37.995997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.767 [2024-12-06 11:25:38.033858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.767 [2024-12-06 11:25:38.033889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.767 [2024-12-06 11:25:38.033895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.767 [2024-12-06 11:25:38.033900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.767 [2024-12-06 11:25:38.033904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:05.767 [2024-12-06 11:25:38.034452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 [2024-12-06 11:25:38.164664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 [2024-12-06 11:25:38.176827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:05.767 11:25:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 null0 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 null1 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1830165 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1830165 /tmp/host.sock 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1830165 ']' 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:05.767 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.767 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 [2024-12-06 11:25:38.253268] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:24:05.768 [2024-12-06 11:25:38.253310] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830165 ] 00:24:05.768 [2024-12-06 11:25:38.325291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.768 [2024-12-06 11:25:38.364845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:05.768 
11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:05.768 11:25:38 
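The `get_subsystem_names` helper traced above pipes `bdev_nvme_get_controllers` through `jq -r '.[].name' | sort | xargs` to produce a single space-separated, sorted list of controller names (empty at this point, since no controller has been attached yet — hence the `[[ '' == '' ]]` check). The equivalent transformation in python, assuming the RPC's JSON output is an array of objects with a `name` field:

```python
import json

def subsystem_names(controllers_json: str) -> str:
    """Mimic: bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs."""
    names = sorted(c["name"] for c in json.loads(controllers_json))
    return " ".join(names)
```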
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:05.768 
11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.768 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 [2024-12-06 11:25:38.786345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.027 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:06.028 
11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:06.028 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.287 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:06.287 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:06.879 [2024-12-06 11:25:39.522464] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:06.879 [2024-12-06 11:25:39.522481] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:06.879 [2024-12-06 11:25:39.522492] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:06.879 [2024-12-06 11:25:39.608743] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:06.880 [2024-12-06 11:25:39.743605] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:24:06.880 [2024-12-06 11:25:39.744268] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14bede0:1 started. 00:24:06.880 [2024-12-06 11:25:39.745524] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:06.880 [2024-12-06 11:25:39.745538] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:06.880 [2024-12-06 11:25:39.791578] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14bede0 was disconnected and freed. delete nvme_qpair. 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:07.139 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.139 
11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.139 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:07.399 
11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:07.399 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 [2024-12-06 11:25:40.186401] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14bf160:1 started. 
00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 [2024-12-06 11:25:40.192388] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14bf160 was disconnected and freed. delete nvme_qpair. 
00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 [2024-12-06 11:25:40.290401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:07.400 [2024-12-06 11:25:40.291148] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:07.400 [2024-12-06 11:25:40.291169] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- 
# local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:07.400 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:07.659 
11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:07.659 [2024-12-06 11:25:40.377405] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@919 -- # local max=10 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:07.659 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:07.659 [2024-12-06 11:25:40.479261] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:07.659 [2024-12-06 11:25:40.479295] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:07.659 [2024-12-06 11:25:40.479303] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:07.659 [2024-12-06 11:25:40.479307] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:08.596 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.597 [2024-12-06 11:25:41.525957] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:08.597 [2024-12-06 11:25:41.525978] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:08.597 [2024-12-06 11:25:41.528069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.597 [2024-12-06 11:25:41.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.597 [2024-12-06 11:25:41.528101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.597 [2024-12-06 11:25:41.528107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.597 [2024-12-06 11:25:41.528114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.597 [2024-12-06 11:25:41.528120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.597 [2024-12-06 11:25:41.528127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.597 [2024-12-06 11:25:41.528133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.597 [2024-12-06 11:25:41.528139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.597 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:08.856 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:08.856 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.856 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.856 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.857 [2024-12-06 11:25:41.538080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.548114] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:24:08.857 [2024-12-06 11:25:41.548126] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:08.857 [2024-12-06 11:25:41.548133] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.548137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.857 [2024-12-06 11:25:41.548154] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.548354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.857 [2024-12-06 11:25:41.548368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.857 [2024-12-06 11:25:41.548375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.857 [2024-12-06 11:25:41.548386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.548411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.857 [2024-12-06 11:25:41.548417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.857 [2024-12-06 11:25:41.548425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.857 [2024-12-06 11:25:41.548431] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.857 [2024-12-06 11:25:41.548435] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:08.857 [2024-12-06 11:25:41.548439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.857 [2024-12-06 11:25:41.558184] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.857 [2024-12-06 11:25:41.558195] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:08.857 [2024-12-06 11:25:41.558199] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.558203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.857 [2024-12-06 11:25:41.558217] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:08.857 [2024-12-06 11:25:41.558403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.857 [2024-12-06 11:25:41.558415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.857 [2024-12-06 11:25:41.558422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.857 [2024-12-06 11:25:41.558433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.558442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.857 [2024-12-06 11:25:41.558448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.857 [2024-12-06 11:25:41.558454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.857 [2024-12-06 11:25:41.558460] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.857 [2024-12-06 11:25:41.558464] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.857 [2024-12-06 11:25:41.558467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.857 [2024-12-06 11:25:41.568247] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.857 [2024-12-06 11:25:41.568257] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:08.857 [2024-12-06 11:25:41.568261] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.568265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.857 [2024-12-06 11:25:41.568277] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.568427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.857 [2024-12-06 11:25:41.568439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.857 [2024-12-06 11:25:41.568446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.857 [2024-12-06 11:25:41.568459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.568468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.857 [2024-12-06 11:25:41.568474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.857 [2024-12-06 11:25:41.568480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.857 [2024-12-06 11:25:41.568485] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.857 [2024-12-06 11:25:41.568489] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.857 [2024-12-06 11:25:41.568492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
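The repeated `connect() failed, errno = 111` records above are expected at this point: the test has just removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 is refused until the path fails over to 4421. errno 111 is ECONNREFUSED on Linux, which can be confirmed with a one-liner (assuming python3 is available on the CI host, as it is for SPDK's own scripts):

```shell
# Map errno 111 from the failed connect() calls back to its symbolic
# name and message using Python's errno table (Linux values).
python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
# -> ECONNREFUSED Connection refused
```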
00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.857 [2024-12-06 11:25:41.578307] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.857 [2024-12-06 11:25:41.578320] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:08.857 [2024-12-06 11:25:41.578324] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.578327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.857 [2024-12-06 11:25:41.578341] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.857 [2024-12-06 11:25:41.578596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.857 [2024-12-06 11:25:41.578610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.857 [2024-12-06 11:25:41.578617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.857 [2024-12-06 11:25:41.578626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.578635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.857 [2024-12-06 11:25:41.578641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.857 [2024-12-06 11:25:41.578650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.857 [2024-12-06 11:25:41.578656] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.857 [2024-12-06 11:25:41.578660] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.857 [2024-12-06 11:25:41.578664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.857 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:24:08.857 [2024-12-06 11:25:41.588371] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.857 [2024-12-06 11:25:41.588384] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:08.857 [2024-12-06 11:25:41.588388] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.588392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.857 [2024-12-06 11:25:41.588406] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:08.857 [2024-12-06 11:25:41.588647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.857 [2024-12-06 11:25:41.588660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.857 [2024-12-06 11:25:41.588668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.857 [2024-12-06 11:25:41.588678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.857 [2024-12-06 11:25:41.588688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.857 [2024-12-06 11:25:41.588695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.858 [2024-12-06 11:25:41.588702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.858 [2024-12-06 11:25:41.588708] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:08.858 [2024-12-06 11:25:41.588712] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.858 [2024-12-06 11:25:41.588716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.858 [2024-12-06 11:25:41.598436] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.858 [2024-12-06 11:25:41.598447] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:08.858 [2024-12-06 11:25:41.598451] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.858 [2024-12-06 11:25:41.598454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.858 [2024-12-06 11:25:41.598467] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:08.858 [2024-12-06 11:25:41.598625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.858 [2024-12-06 11:25:41.598637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.858 [2024-12-06 11:25:41.598645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.858 [2024-12-06 11:25:41.598655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.858 [2024-12-06 11:25:41.598663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.858 [2024-12-06 11:25:41.598672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.858 [2024-12-06 11:25:41.598679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.858 [2024-12-06 11:25:41.598685] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.858 [2024-12-06 11:25:41.598689] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.858 [2024-12-06 11:25:41.598693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.858 [2024-12-06 11:25:41.608498] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:08.858 [2024-12-06 11:25:41.608508] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:08.858 [2024-12-06 11:25:41.608512] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.858 [2024-12-06 11:25:41.608515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.858 [2024-12-06 11:25:41.608528] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:08.858 [2024-12-06 11:25:41.608741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.858 [2024-12-06 11:25:41.608753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1490f30 with addr=10.0.0.2, port=4420 00:24:08.858 [2024-12-06 11:25:41.608760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490f30 is same with the state(6) to be set 00:24:08.858 [2024-12-06 11:25:41.608770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490f30 (9): Bad file descriptor 00:24:08.858 [2024-12-06 11:25:41.608779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.858 [2024-12-06 11:25:41.608785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.858 [2024-12-06 11:25:41.608792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.858 [2024-12-06 11:25:41.608797] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.858 [2024-12-06 11:25:41.608801] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.858 [2024-12-06 11:25:41.608805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
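The `get_subsystem_paths` helper (host/discovery.sh line 63 in this trace) is what produces the `4420 4421` / `4421` strings being compared by `waitforcondition`: it queries the host-side SPDK app over its RPC socket for the named controller's paths and flattens the TCP service IDs onto one sorted line. A sketch reconstructed from the traced pipeline, assuming `rpc_cmd` forwards to SPDK's `scripts/rpc.py` as it does elsewhere in the suite:

```shell
# Sketch of get_subsystem_paths from host/discovery.sh: list every TCP
# service ID (port) the named controller has a path to, numerically
# sorted and joined on one line, e.g. "4420 4421".
# Assumes rpc_cmd wraps scripts/rpc.py against the given -s socket.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n \
        | xargs
}
```

After the 4420 listener removal above, this pipeline shrinks from `4420 4421` to `4421`, which is exactly the transition the `waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'` check below is waiting for.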
00:24:08.858 [2024-12-06 11:25:41.612842] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:08.858 [2024-12-06 11:25:41.612857] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:08.858 11:25:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:08.858 11:25:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:08.858 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.858 
11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.859 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.859 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.859 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.859 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.116 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.116 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:09.116 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:09.116 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:09.117 11:25:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.117 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.053 [2024-12-06 11:25:42.900474] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:10.053 [2024-12-06 11:25:42.900488] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:10.053 [2024-12-06 11:25:42.900497] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:10.311 [2024-12-06 11:25:43.026871] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:10.571 [2024-12-06 11:25:43.291147] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:10.571 [2024-12-06 11:25:43.291728] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14c2c60:1 started. 00:24:10.571 [2024-12-06 11:25:43.293229] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:10.572 [2024-12-06 11:25:43.293252] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 [2024-12-06 11:25:43.300412] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14c2c60 was disconnected and freed. delete nvme_qpair. 00:24:10.572 request: 00:24:10.572 { 00:24:10.572 "name": "nvme", 00:24:10.572 "trtype": "tcp", 00:24:10.572 "traddr": "10.0.0.2", 00:24:10.572 "adrfam": "ipv4", 00:24:10.572 "trsvcid": "8009", 00:24:10.572 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:10.572 "wait_for_attach": true, 00:24:10.572 "method": "bdev_nvme_start_discovery", 00:24:10.572 "req_id": 1 00:24:10.572 } 00:24:10.572 Got JSON-RPC error response 00:24:10.572 response: 00:24:10.572 { 00:24:10.572 "code": -17, 00:24:10.572 "message": "File exists" 00:24:10.572 } 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:10.572 
11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:10.572 11:25:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 request: 00:24:10.572 { 00:24:10.572 "name": "nvme_second", 00:24:10.572 "trtype": "tcp", 00:24:10.572 "traddr": "10.0.0.2", 00:24:10.572 "adrfam": "ipv4", 00:24:10.572 "trsvcid": "8009", 00:24:10.572 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:10.572 "wait_for_attach": true, 00:24:10.572 "method": "bdev_nvme_start_discovery", 00:24:10.572 "req_id": 1 00:24:10.572 } 00:24:10.572 Got JSON-RPC error response 00:24:10.572 response: 00:24:10.572 { 00:24:10.572 "code": -17, 00:24:10.572 "message": "File exists" 00:24:10.572 } 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:24:10.572 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.832 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.769 [2024-12-06 11:25:44.532663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.769 [2024-12-06 11:25:44.532687] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c0bb0 with addr=10.0.0.2, port=8010 00:24:11.769 [2024-12-06 11:25:44.532700] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:11.769 [2024-12-06 11:25:44.532706] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:11.769 [2024-12-06 11:25:44.532712] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:12.703 [2024-12-06 11:25:45.535081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.703 [2024-12-06 11:25:45.535104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c0bb0 with addr=10.0.0.2, port=8010 00:24:12.703 [2024-12-06 11:25:45.535115] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:12.703 [2024-12-06 11:25:45.535121] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:12.703 [2024-12-06 11:25:45.535126] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:13.637 [2024-12-06 11:25:46.537281] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:13.637 request: 00:24:13.637 { 00:24:13.637 "name": "nvme_second", 00:24:13.637 "trtype": "tcp", 00:24:13.637 "traddr": "10.0.0.2", 00:24:13.637 "adrfam": "ipv4", 00:24:13.637 "trsvcid": "8010", 00:24:13.637 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:13.637 "wait_for_attach": false, 00:24:13.637 "attach_timeout_ms": 3000, 00:24:13.637 "method": "bdev_nvme_start_discovery", 00:24:13.637 "req_id": 1 00:24:13.637 } 00:24:13.637 Got JSON-RPC error response 00:24:13.637 response: 00:24:13.637 { 00:24:13.637 "code": -110, 00:24:13.637 "message": "Connection timed out" 00:24:13.637 } 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:13.637 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1830165 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:13.896 11:25:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.896 rmmod nvme_tcp 00:24:13.896 rmmod nvme_fabrics 00:24:13.896 rmmod nvme_keyring 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1830070 ']' 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1830070 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1830070 ']' 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1830070 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830070 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830070' 
00:24:13.896 killing process with pid 1830070 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1830070 00:24:13.896 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1830070 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.156 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.063 00:24:16.063 real 0m17.267s 00:24:16.063 user 0m20.538s 00:24:16.063 sys 0m5.830s 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.063 11:25:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.063 ************************************ 00:24:16.063 END TEST nvmf_host_discovery 00:24:16.063 ************************************ 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.063 11:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.323 ************************************ 00:24:16.323 START TEST nvmf_host_multipath_status 00:24:16.323 ************************************ 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:16.323 * Looking for test storage... 
00:24:16.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.323 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:16.324 11:25:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.324 11:25:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.324 --rc genhtml_branch_coverage=1 00:24:16.324 --rc genhtml_function_coverage=1 00:24:16.324 --rc genhtml_legend=1 00:24:16.324 --rc geninfo_all_blocks=1 00:24:16.324 --rc geninfo_unexecuted_blocks=1 00:24:16.324 00:24:16.324 ' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.324 --rc genhtml_branch_coverage=1 00:24:16.324 --rc genhtml_function_coverage=1 00:24:16.324 --rc genhtml_legend=1 00:24:16.324 --rc geninfo_all_blocks=1 00:24:16.324 --rc geninfo_unexecuted_blocks=1 00:24:16.324 00:24:16.324 ' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.324 --rc genhtml_branch_coverage=1 00:24:16.324 --rc genhtml_function_coverage=1 00:24:16.324 --rc genhtml_legend=1 00:24:16.324 --rc geninfo_all_blocks=1 00:24:16.324 --rc geninfo_unexecuted_blocks=1 00:24:16.324 00:24:16.324 ' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.324 --rc genhtml_branch_coverage=1 00:24:16.324 --rc genhtml_function_coverage=1 00:24:16.324 --rc genhtml_legend=1 00:24:16.324 --rc geninfo_all_blocks=1 00:24:16.324 --rc geninfo_unexecuted_blocks=1 00:24:16.324 00:24:16.324 ' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:16.324 
11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.324 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.325 11:25:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.325 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.897 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:22.898 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:22.898 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:22.898 Found net devices under 0000:af:00.0: cvl_0_0 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.898 11:25:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:22.898 Found net devices under 0000:af:00.1: cvl_0_1 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.898 11:25:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.898 11:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:24:22.898 00:24:22.898 --- 10.0.0.2 ping statistics --- 00:24:22.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.898 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:22.898 00:24:22.898 --- 10.0.0.1 ping statistics --- 00:24:22.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.898 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1835559 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1835559 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1835559 ']' 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.898 11:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:22.898 [2024-12-06 11:25:55.295333] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:24:22.899 [2024-12-06 11:25:55.295377] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.899 [2024-12-06 11:25:55.371655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.899 [2024-12-06 11:25:55.409164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.899 [2024-12-06 11:25:55.409197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:22.899 [2024-12-06 11:25:55.409203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.899 [2024-12-06 11:25:55.409208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.899 [2024-12-06 11:25:55.409213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.899 [2024-12-06 11:25:55.410416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.899 [2024-12-06 11:25:55.410416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1835559 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.467 [2024-12-06 11:25:56.291999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.467 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:23.726 Malloc0 00:24:23.726 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:23.986 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.986 11:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.245 [2024-12-06 11:25:57.074575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.245 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.504 [2024-12-06 11:25:57.255029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1835856 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1835856 /var/tmp/bdevperf.sock 00:24:24.504 11:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1835856 ']' 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.504 11:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.438 11:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.438 11:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:25.438 11:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:25.438 11:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:25.696 Nvme0n1 00:24:25.954 11:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:26.212 Nvme0n1 00:24:26.212 11:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:26.212 11:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:28.737 11:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:28.737 11:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:28.737 11:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.737 11:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:29.671 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:29.671 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:29.671 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.671 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.931 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.931 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:29.931 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.931 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.190 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.190 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.190 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.190 11:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.190 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.190 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.190 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.190 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.449 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.449 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.449 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.449 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.708 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.708 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.708 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.708 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.967 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.967 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:30.967 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:30.967 11:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.226 11:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:32.163 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:32.163 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:32.163 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.163 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.430 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.430 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.430 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.430 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.690 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.690 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.690 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.690 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.949 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.949 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.949 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.949 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.208 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.208 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.208 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.208 11:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.208 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.208 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.208 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.208 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.468 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.468 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:33.468 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.727 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:33.986 11:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:34.924 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:34.924 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:34.924 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.924 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:35.183 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.183 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:35.183 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.183 11:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.442 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.700 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.700 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.700 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.700 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.960 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.960 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.960 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.960 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.220 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.220 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:36.220 11:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.220 11:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:36.575 11:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:37.554 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:37.554 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.554 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.554 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:37.813 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.072 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.072 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.072 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.072 11:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.331 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.331 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.331 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.331 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.589 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.589 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:38.589 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.589 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.590 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.590 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:38.590 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:38.849 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:39.107 11:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:40.043 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:40.043 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:40.043 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.043 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.302 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.302 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.302 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.302 11:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.302 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.302 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.302 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.302 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.561 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.561 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.561 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.561 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.820 11:26:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.820 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.079 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.079 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:41.079 11:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:41.337 11:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:41.596 11:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:42.531 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:42.531 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:42.531 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.531 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.789 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.047 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.047 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.047 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.047 11:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.305 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.305 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:43.305 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.305 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.564 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:43.822 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:43.822 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:44.081 11:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:44.339 11:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:45.276 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:45.276 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.276 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:45.276 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.534 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.534 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:45.534 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.535 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:45.535 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.535 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:45.535 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.535 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.793 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.793 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.793 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:45.793 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.051 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.051 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.051 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.051 11:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:46.310 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:46.569 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:46.828 11:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:47.764 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:47.764 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:47.764 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.764 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.023 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.023 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:48.023 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.023 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.290 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.290 11:26:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.290 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.290 11:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.290 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.290 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.290 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.290 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.550 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.550 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:48.550 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.550 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.807 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.807 
11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.807 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.807 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.065 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.065 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:49.065 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:49.065 11:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:49.322 11:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:50.255 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:50.255 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:50.255 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.255 11:26:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:50.513 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.513 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:50.513 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.513 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.771 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.771 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.771 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.771 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.029 11:26:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.029 11:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:51.286 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.286 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:51.286 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.286 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.543 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.543 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:51.543 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:51.802 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:51.802 11:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.177 11:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.435 11:26:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.435 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:53.693 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.693 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:53.693 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.693 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.952 
11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1835856 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1835856 ']' 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1835856 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.952 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1835856 00:24:54.230 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:54.230 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:54.230 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1835856' 00:24:54.230 killing process with pid 1835856 00:24:54.230 11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1835856 00:24:54.230 
11:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1835856 00:24:54.230 { 00:24:54.230 "results": [ 00:24:54.230 { 00:24:54.230 "job": "Nvme0n1", 00:24:54.230 "core_mask": "0x4", 00:24:54.230 "workload": "verify", 00:24:54.230 "status": "terminated", 00:24:54.230 "verify_range": { 00:24:54.230 "start": 0, 00:24:54.230 "length": 16384 00:24:54.230 }, 00:24:54.230 "queue_depth": 128, 00:24:54.230 "io_size": 4096, 00:24:54.230 "runtime": 27.663802, 00:24:54.230 "iops": 11483.237192053355, 00:24:54.230 "mibps": 44.85639528145842, 00:24:54.230 "io_failed": 0, 00:24:54.230 "io_timeout": 0, 00:24:54.230 "avg_latency_us": 11128.474874589696, 00:24:54.230 "min_latency_us": 781.9636363636364, 00:24:54.230 "max_latency_us": 3019898.88 00:24:54.230 } 00:24:54.230 ], 00:24:54.230 "core_count": 1 00:24:54.230 } 00:24:54.230 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1835856 00:24:54.230 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.230 [2024-12-06 11:25:57.329910] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:24:54.230 [2024-12-06 11:25:57.329958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835856 ] 00:24:54.230 [2024-12-06 11:25:57.403234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.230 [2024-12-06 11:25:57.441814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.230 Running I/O for 90 seconds... 
00:24:54.230 12319.00 IOPS, 48.12 MiB/s [2024-12-06T10:26:27.168Z] 12401.50 IOPS, 48.44 MiB/s [2024-12-06T10:26:27.168Z] 12403.00 IOPS, 48.45 MiB/s [2024-12-06T10:26:27.168Z] 12470.75 IOPS, 48.71 MiB/s [2024-12-06T10:26:27.168Z] 12513.00 IOPS, 48.88 MiB/s [2024-12-06T10:26:27.168Z] 12544.50 IOPS, 49.00 MiB/s [2024-12-06T10:26:27.168Z] 12542.71 IOPS, 48.99 MiB/s [2024-12-06T10:26:27.168Z] 12557.00 IOPS, 49.05 MiB/s [2024-12-06T10:26:27.168Z] 12554.78 IOPS, 49.04 MiB/s [2024-12-06T10:26:27.168Z] 12547.40 IOPS, 49.01 MiB/s [2024-12-06T10:26:27.168Z] 12529.73 IOPS, 48.94 MiB/s [2024-12-06T10:26:27.168Z] 12514.42 IOPS, 48.88 MiB/s [2024-12-06T10:26:27.168Z] [2024-12-06 11:26:11.634781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.230 [2024-12-06 11:26:11.634818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.230 [2024-12-06 11:26:11.634957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:54.230 [2024-12-06 11:26:11.634968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.634974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.634985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.634991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63192 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:24:54.231 [2024-12-06 11:26:11.635935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.635982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.635995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.636001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.636014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.636020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.636033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 
[2024-12-06 11:26:11.636040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.636053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.636065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.636078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.636085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:54.231 [2024-12-06 11:26:11.636097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.231 [2024-12-06 11:26:11.636103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 
11:26:11.636158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.636630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.636636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:54.232 [2024-12-06 11:26:11.637315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.232 [2024-12-06 11:26:11.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.637985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.637992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.233 [2024-12-06 11:26:11.638270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.233 [2024-12-06 11:26:11.638286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:54.234 [2024-12-06 11:26:11.638310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:54.234 [2024-12-06 11:26:11.638333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:54.234 [2024-12-06 11:26:11.638355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:54.234 [2024-12-06 11:26:11.638377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:54.234 [2024-12-06 11:26:11.638400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:11.638406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:54.234 11948.00 IOPS, 46.67 MiB/s [2024-12-06T10:26:27.172Z] 11094.57 IOPS, 43.34 MiB/s [2024-12-06T10:26:27.172Z] 10354.93 IOPS, 40.45 MiB/s [2024-12-06T10:26:27.172Z] 10159.88 IOPS, 39.69 MiB/s [2024-12-06T10:26:27.172Z] 10296.88 IOPS, 40.22 MiB/s [2024-12-06T10:26:27.172Z] 10464.06 IOPS, 40.88 MiB/s [2024-12-06T10:26:27.172Z] 10679.74 IOPS, 41.72 MiB/s [2024-12-06T10:26:27.172Z] 10868.30 IOPS, 42.45 MiB/s [2024-12-06T10:26:27.172Z] 10959.19 IOPS, 42.81 MiB/s [2024-12-06T10:26:27.172Z] 11018.23 IOPS, 43.04 MiB/s [2024-12-06T10:26:27.172Z] 11086.04 IOPS, 43.30 MiB/s [2024-12-06T10:26:27.172Z] 11233.92 IOPS, 43.88 MiB/s [2024-12-06T10:26:27.172Z] 11360.08 IOPS, 44.38 MiB/s [2024-12-06T10:26:27.172Z] [2024-12-06 11:26:24.706558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.234 [2024-12-06 11:26:24.706596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.706897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.706918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.706936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.706954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.706972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.706983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.706990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.234 [2024-12-06 11:26:24.707117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:54.234 [2024-12-06 11:26:24.707128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.234 [2024-12-06 11:26:24.707135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.707146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.707153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.707164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.707170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.707182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.707188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.707200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.707206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.235 [2024-12-06 11:26:24.708978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.708989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.708995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.709006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.709013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.709031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.709305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.709316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.709329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.235 [2024-12-06 11:26:24.709336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:54.235 [2024-12-06 11:26:24.709347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.709876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.709888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.709894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.710500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.710521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.710539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.710558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.710576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.236 [2024-12-06 11:26:24.710594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.710613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:54.236 [2024-12-06 11:26:24.710624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.236 [2024-12-06 11:26:24.710630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.237 [2024-12-06 11:26:24.710800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:54.237 [2024-12-06 11:26:24.710954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.237 [2024-12-06 11:26:24.710960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.710979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.711838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.711859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.711877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.711894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.711913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.711933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.711952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.711973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.711984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.711990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.712086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.712106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.712124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.237 [2024-12-06 11:26:24.712142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.237 [2024-12-06 11:26:24.712177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:54.237 [2024-12-06 11:26:24.712188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.712197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.712209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.712215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.712968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.712982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.712996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.713979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.713990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.238 [2024-12-06 11:26:24.713996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.714009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.714016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.238 [2024-12-06 11:26:24.714034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:54.238 [2024-12-06 11:26:24.714045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.714051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.714068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.714075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.714086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.714092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.714104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.714110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.715129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.715149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.715167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.715186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.239 [2024-12-06 11:26:24.715203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.715221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.715241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.715259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:54.239 [2024-12-06 11:26:24.715271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.239 [2024-12-06 11:26:24.715278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... repeated command/completion NOTICE pairs elided: READ and WRITE commands on qid:1 (nsid:1, len:8, lba range ~95792-97408) all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 11:26:24.715 through 11:26:24.731 ...]
00:24:54.242 [2024-12-06 11:26:24.731414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.731438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.731463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.731487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.731512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.731536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.731547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.732879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.732906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.732932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.732956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.732980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.732997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.733007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.733109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.242 [2024-12-06 11:26:24.733160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:54.242 [2024-12-06 11:26:24.733279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.242 [2024-12-06 11:26:24.733288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.733633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.733823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.733832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.735334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.735620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.735646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.735672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.243 [2024-12-06 11:26:24.735723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:54.243 [2024-12-06 11:26:24.735740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.243 [2024-12-06 11:26:24.735749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.735774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.735798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.735823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.735847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.735872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.735900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.735926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.735950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.737302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.737596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.244 [2024-12-06 11:26:24.737605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.738212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.738230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.738248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.738257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.738274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.738299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.738308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:54.244 [2024-12-06 11:26:24.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.244 [2024-12-06 11:26:24.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.738483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.738508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.738586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.738611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.738637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.738975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.245 [2024-12-06 11:26:24.739407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.739549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.739558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.740358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.740374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.740388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.740395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.740407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.245 [2024-12-06 11:26:24.740414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:54.245 [2024-12-06 11:26:24.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.740803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.246 [2024-12-06 11:26:24.740861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:54.246 [2024-12-06 11:26:24.742510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.246 [2024-12-06 11:26:24.742518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... repeated *NOTICE* pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) elided: every outstanding READ/WRITE on qid:1 (nsid:1, len:8, assorted cid/lba values) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 007c through 006f, timestamps 2024-12-06 11:26:24.742510 through 11:26:24.748563 ...]
00:24:54.249 [2024-12-06 11:26:24.748563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.249 [2024-12-06 11:26:24.748569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:54.249 [2024-12-06 11:26:24.748581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.249 [2024-12-06 11:26:24.748588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:54.249 [2024-12-06 11:26:24.748600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.249 [2024-12-06 11:26:24.748606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:54.249 [2024-12-06 11:26:24.748617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.249 [2024-12-06 11:26:24.748623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.748642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.748659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.748681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.748699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.748717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.748729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.748735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.749340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.749360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.749453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.749466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.749472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.750436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.750442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.751584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.751613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.250 [2024-12-06 11:26:24.751621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.751632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.751639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:54.250 [2024-12-06 11:26:24.751651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.250 [2024-12-06 11:26:24.751661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.751935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.751988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.751999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.752044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.752066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.752085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.752103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.251 [2024-12-06 11:26:24.752197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.251 [2024-12-06 11:26:24.752215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:54.251 [2024-12-06 11:26:24.752227] nvme_qpair.c: 
[~150 further near-identical NOTICE pairs omitted: nvme_io_qpair_print_command READ/WRITE entries on sqid:1 (nsid:1, lba 97264-99168, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0039 through 002a, timestamps 2024-12-06 11:26:24.752233 through 11:26:24.758146]
00:24:54.254 [2024-12-06 11:26:24.758158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.254 [2024-12-06 11:26:24.758165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:54.254 [2024-12-06 11:26:24.758176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.254 [2024-12-06 11:26:24.758182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:54.254 [2024-12-06 11:26:24.758194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.254 [2024-12-06 11:26:24.758201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:54.254 [2024-12-06 11:26:24.758213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.254 [2024-12-06 11:26:24.758221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:54.254 [2024-12-06 11:26:24.758233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.254 [2024-12-06 11:26:24.758240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:54.254 [2024-12-06 11:26:24.760144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.255 [2024-12-06 11:26:24.760839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:54.255 [2024-12-06 11:26:24.760868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.255 [2024-12-06 11:26:24.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:54.256 [2024-12-06 11:26:24.760885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.256 [2024-12-06 11:26:24.760893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:54.256 [2024-12-06 11:26:24.760904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.256 [2024-12-06 11:26:24.760911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:54.256 11424.69 IOPS, 44.63 MiB/s [2024-12-06T10:26:27.194Z] 11460.81 IOPS, 44.77 MiB/s [2024-12-06T10:26:27.194Z] Received shutdown signal, test time was about 27.664452 seconds 00:24:54.256 00:24:54.256 Latency(us) 00:24:54.256 [2024-12-06T10:26:27.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.256 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:54.256 Verification LBA range: start 0x0 length 0x4000 00:24:54.256 Nvme0n1 : 27.66 11483.24 44.86 0.00 0.00 11128.47 781.96 3019898.88 00:24:54.256 [2024-12-06T10:26:27.194Z] =================================================================================================================== 00:24:54.256 [2024-12-06T10:26:27.194Z] Total : 11483.24 44.86 0.00 0.00 11128.47 781.96 3019898.88 00:24:54.256 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.515 rmmod nvme_tcp 00:24:54.515 rmmod nvme_fabrics 00:24:54.515 rmmod nvme_keyring 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1835559 ']' 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1835559 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1835559 ']' 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1835559 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.515 
11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1835559 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1835559' 00:24:54.515 killing process with pid 1835559 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1835559 00:24:54.515 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1835559 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.775 
11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.775 11:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.312 00:24:57.312 real 0m40.608s 00:24:57.312 user 1m48.577s 00:24:57.312 sys 0m11.377s 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:57.312 ************************************ 00:24:57.312 END TEST nvmf_host_multipath_status 00:24:57.312 ************************************ 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.312 ************************************ 00:24:57.312 START TEST nvmf_discovery_remove_ifc 00:24:57.312 ************************************ 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:57.312 * Looking for test storage... 
00:24:57.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.312 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:57.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.312 --rc genhtml_branch_coverage=1 00:24:57.312 --rc genhtml_function_coverage=1 00:24:57.312 --rc genhtml_legend=1 00:24:57.312 --rc geninfo_all_blocks=1 00:24:57.313 --rc geninfo_unexecuted_blocks=1 00:24:57.313 00:24:57.313 ' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.313 --rc genhtml_branch_coverage=1 00:24:57.313 --rc genhtml_function_coverage=1 00:24:57.313 --rc genhtml_legend=1 00:24:57.313 --rc geninfo_all_blocks=1 00:24:57.313 --rc geninfo_unexecuted_blocks=1 00:24:57.313 00:24:57.313 ' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.313 --rc genhtml_branch_coverage=1 00:24:57.313 --rc genhtml_function_coverage=1 00:24:57.313 --rc genhtml_legend=1 00:24:57.313 --rc geninfo_all_blocks=1 00:24:57.313 --rc geninfo_unexecuted_blocks=1 00:24:57.313 00:24:57.313 ' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.313 --rc genhtml_branch_coverage=1 00:24:57.313 --rc genhtml_function_coverage=1 00:24:57.313 --rc genhtml_legend=1 00:24:57.313 --rc geninfo_all_blocks=1 00:24:57.313 --rc geninfo_unexecuted_blocks=1 00:24:57.313 00:24:57.313 ' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.313 
11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.313 11:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.884 11:26:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.884 11:26:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.884 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.884 11:26:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.884 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.884 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.884 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.884 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:25:03.885 00:25:03.885 --- 10.0.0.2 ping statistics --- 00:25:03.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.885 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:03.885 00:25:03.885 --- 10.0.0.1 ping statistics --- 00:25:03.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.885 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1844977 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1844977 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1844977 ']' 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.885 11:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.885 [2024-12-06 11:26:35.920806] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:25:03.885 [2024-12-06 11:26:35.920845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.885 [2024-12-06 11:26:35.996881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.885 [2024-12-06 11:26:36.032539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.885 [2024-12-06 11:26:36.032571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:03.885 [2024-12-06 11:26:36.032577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.885 [2024-12-06 11:26:36.032582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.885 [2024-12-06 11:26:36.032586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.885 [2024-12-06 11:26:36.033138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.885 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.885 [2024-12-06 11:26:36.780855] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.885 [2024-12-06 11:26:36.789029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:03.885 null0 00:25:04.144 [2024-12-06 11:26:36.821011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1845049 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1845049 /tmp/host.sock 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1845049 ']' 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:04.144 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.144 11:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.144 [2024-12-06 11:26:36.891531] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:25:04.144 [2024-12-06 11:26:36.891569] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845049 ] 00:25:04.144 [2024-12-06 11:26:36.963308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.144 [2024-12-06 11:26:37.005010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.081 11:26:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.081 11:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.018 [2024-12-06 11:26:38.849218] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.018 [2024-12-06 11:26:38.849238] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.018 [2024-12-06 11:26:38.849255] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.018 [2024-12-06 11:26:38.935507] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:06.277 [2024-12-06 11:26:39.031337] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:06.277 [2024-12-06 11:26:39.032123] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf709d0:1 started. 
00:25:06.277 [2024-12-06 11:26:39.033308] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:06.277 [2024-12-06 11:26:39.033345] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:06.277 [2024-12-06 11:26:39.033364] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:06.277 [2024-12-06 11:26:39.033375] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:06.277 [2024-12-06 11:26:39.033394] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.277 [2024-12-06 11:26:39.037968] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf709d0 was disconnected and freed. delete nvme_qpair. 
00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.277 11:26:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.277 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.536 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.536 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:06.536 11:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.474 11:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:08.410 11:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.787 11:26:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:09.787 11:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.723 11:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.659 11:26:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.659 [2024-12-06 11:26:44.474973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:11.659 [2024-12-06 11:26:44.475019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.659 [2024-12-06 11:26:44.475030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.659 [2024-12-06 11:26:44.475039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.659 [2024-12-06 11:26:44.475045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.659 [2024-12-06 11:26:44.475051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.659 [2024-12-06 11:26:44.475061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.659 [2024-12-06 11:26:44.475067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.659 
[2024-12-06 11:26:44.475073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.659 [2024-12-06 11:26:44.475080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.659 [2024-12-06 11:26:44.475086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.659 [2024-12-06 11:26:44.475092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4d210 is same with the state(6) to be set 00:25:11.659 [2024-12-06 11:26:44.484991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4d210 (9): Bad file descriptor 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.659 11:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.659 [2024-12-06 11:26:44.495027] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.659 [2024-12-06 11:26:44.495039] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.659 [2024-12-06 11:26:44.495045] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.659 [2024-12-06 11:26:44.495049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.659 [2024-12-06 11:26:44.495082] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.594 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.852 [2024-12-06 11:26:45.548098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:12.852 [2024-12-06 11:26:45.548176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4d210 with addr=10.0.0.2, port=4420 00:25:12.852 [2024-12-06 11:26:45.548209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4d210 is same with the state(6) to be set 00:25:12.852 [2024-12-06 11:26:45.548258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4d210 (9): Bad file descriptor 00:25:12.852 [2024-12-06 11:26:45.549212] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:12.852 [2024-12-06 11:26:45.549274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.852 [2024-12-06 11:26:45.549299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.852 [2024-12-06 11:26:45.549321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.852 [2024-12-06 11:26:45.549342] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.852 [2024-12-06 11:26:45.549357] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.852 [2024-12-06 11:26:45.549370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.852 [2024-12-06 11:26:45.549391] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.852 [2024-12-06 11:26:45.549406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.852 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.852 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.852 11:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.788 [2024-12-06 11:26:46.551918] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.788 [2024-12-06 11:26:46.551936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:13.788 [2024-12-06 11:26:46.551946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.788 [2024-12-06 11:26:46.551952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.788 [2024-12-06 11:26:46.551959] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:13.788 [2024-12-06 11:26:46.551965] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.788 [2024-12-06 11:26:46.551969] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.788 [2024-12-06 11:26:46.551973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.788 [2024-12-06 11:26:46.551990] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:13.788 [2024-12-06 11:26:46.552007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.788 [2024-12-06 11:26:46.552015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.788 [2024-12-06 11:26:46.552023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.788 [2024-12-06 11:26:46.552030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.788 [2024-12-06 11:26:46.552037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:13.788 [2024-12-06 11:26:46.552043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.788 [2024-12-06 11:26:46.552049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.788 [2024-12-06 11:26:46.552063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.788 [2024-12-06 11:26:46.552070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.788 [2024-12-06 11:26:46.552076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.789 [2024-12-06 11:26:46.552082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:13.789 [2024-12-06 11:26:46.552345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3c930 (9): Bad file descriptor 00:25:13.789 [2024-12-06 11:26:46.553356] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:13.789 [2024-12-06 11:26:46.553366] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.789 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:14.047 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:14.047 11:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:14.982 11:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.912 [2024-12-06 11:26:48.604519] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:15.912 [2024-12-06 11:26:48.604535] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:15.912 [2024-12-06 11:26:48.604546] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.912 [2024-12-06 11:26:48.731927] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.912 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.169 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:16.169 11:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.169 [2024-12-06 11:26:48.957036] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:16.169 [2024-12-06 11:26:48.957642] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xf56750:1 started. 
00:25:16.169 [2024-12-06 11:26:48.958604] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:16.169 [2024-12-06 11:26:48.958634] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:16.169 [2024-12-06 11:26:48.958652] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:16.169 [2024-12-06 11:26:48.958665] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:16.169 [2024-12-06 11:26:48.958671] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:16.169 [2024-12-06 11:26:48.963188] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xf56750 was disconnected and freed. delete nvme_qpair. 00:25:17.101 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:17.102 11:26:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1845049 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1845049 ']' 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1845049 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845049 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845049' 00:25:17.102 killing process with pid 1845049 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1845049 00:25:17.102 11:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1845049 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.360 
11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.360 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.361 rmmod nvme_tcp 00:25:17.361 rmmod nvme_fabrics 00:25:17.361 rmmod nvme_keyring 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1844977 ']' 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1844977 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1844977 ']' 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1844977 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844977 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844977' 00:25:17.361 
killing process with pid 1844977 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1844977 00:25:17.361 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1844977 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.619 11:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.525 11:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.525 00:25:19.525 real 0m22.750s 00:25:19.525 user 0m28.866s 00:25:19.525 sys 0m5.930s 00:25:19.525 11:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.525 11:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.525 ************************************ 00:25:19.525 END TEST nvmf_discovery_remove_ifc 00:25:19.525 ************************************ 00:25:19.785 11:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:19.785 11:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:19.785 11:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.786 ************************************ 00:25:19.786 START TEST nvmf_identify_kernel_target 00:25:19.786 ************************************ 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:19.786 * Looking for test storage... 
00:25:19.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:19.786 11:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.786 11:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.786 --rc genhtml_branch_coverage=1 00:25:19.786 --rc genhtml_function_coverage=1 00:25:19.786 --rc genhtml_legend=1 00:25:19.786 --rc geninfo_all_blocks=1 00:25:19.786 --rc geninfo_unexecuted_blocks=1 00:25:19.786 00:25:19.786 ' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.786 --rc genhtml_branch_coverage=1 00:25:19.786 --rc genhtml_function_coverage=1 00:25:19.786 --rc genhtml_legend=1 00:25:19.786 --rc geninfo_all_blocks=1 00:25:19.786 --rc geninfo_unexecuted_blocks=1 00:25:19.786 00:25:19.786 ' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.786 --rc genhtml_branch_coverage=1 00:25:19.786 --rc genhtml_function_coverage=1 00:25:19.786 --rc genhtml_legend=1 00:25:19.786 --rc geninfo_all_blocks=1 00:25:19.786 --rc geninfo_unexecuted_blocks=1 00:25:19.786 00:25:19.786 ' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.786 --rc genhtml_branch_coverage=1 00:25:19.786 --rc genhtml_function_coverage=1 00:25:19.786 --rc genhtml_legend=1 00:25:19.786 --rc geninfo_all_blocks=1 00:25:19.786 --rc geninfo_unexecuted_blocks=1 00:25:19.786 00:25:19.786 ' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.786 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.787 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.047 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.047 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.047 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.047 11:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.616 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.617 11:26:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:26.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.617 11:26:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:26.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.617 11:26:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:26.617 Found net devices under 0000:af:00.0: cvl_0_0 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:26.617 Found net devices under 0000:af:00.1: cvl_0_1 
00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.617 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:26.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:26.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:25:26.618 00:25:26.618 --- 10.0.0.2 ping statistics --- 00:25:26.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.618 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:26.618 00:25:26.618 --- 10.0.0.1 ping statistics --- 00:25:26.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.618 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:26.618 
11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:26.618 11:26:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:28.517 Waiting for block devices as requested 00:25:28.776 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:25:28.776 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:29.035 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:29.035 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:29.035 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:29.035 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:29.294 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:29.294 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:29.294 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:29.553 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:29.553 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:29.553 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:29.810 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:29.810 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:29.810 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:29.810 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:30.069 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:30.069 No valid GPT data, bailing 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:30.069 11:27:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:30.328 00:25:30.329 Discovery Log Number of Records 2, Generation counter 2 00:25:30.329 =====Discovery Log Entry 0====== 00:25:30.329 trtype: tcp 00:25:30.329 adrfam: ipv4 00:25:30.329 subtype: current discovery subsystem 
00:25:30.329 treq: not specified, sq flow control disable supported 00:25:30.329 portid: 1 00:25:30.329 trsvcid: 4420 00:25:30.329 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:30.329 traddr: 10.0.0.1 00:25:30.329 eflags: none 00:25:30.329 sectype: none 00:25:30.329 =====Discovery Log Entry 1====== 00:25:30.329 trtype: tcp 00:25:30.329 adrfam: ipv4 00:25:30.329 subtype: nvme subsystem 00:25:30.329 treq: not specified, sq flow control disable supported 00:25:30.329 portid: 1 00:25:30.329 trsvcid: 4420 00:25:30.329 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:30.329 traddr: 10.0.0.1 00:25:30.329 eflags: none 00:25:30.329 sectype: none 00:25:30.329 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:30.329 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:30.329 ===================================================== 00:25:30.329 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:30.329 ===================================================== 00:25:30.329 Controller Capabilities/Features 00:25:30.329 ================================ 00:25:30.329 Vendor ID: 0000 00:25:30.329 Subsystem Vendor ID: 0000 00:25:30.329 Serial Number: a0fcec3feb1f66320e73 00:25:30.329 Model Number: Linux 00:25:30.329 Firmware Version: 6.8.9-20 00:25:30.329 Recommended Arb Burst: 0 00:25:30.329 IEEE OUI Identifier: 00 00 00 00:25:30.329 Multi-path I/O 00:25:30.329 May have multiple subsystem ports: No 00:25:30.329 May have multiple controllers: No 00:25:30.329 Associated with SR-IOV VF: No 00:25:30.329 Max Data Transfer Size: Unlimited 00:25:30.329 Max Number of Namespaces: 0 00:25:30.329 Max Number of I/O Queues: 1024 00:25:30.329 NVMe Specification Version (VS): 1.3 00:25:30.329 NVMe Specification Version (Identify): 1.3 00:25:30.329 Maximum Queue Entries: 1024 
00:25:30.329 Contiguous Queues Required: No 00:25:30.329 Arbitration Mechanisms Supported 00:25:30.329 Weighted Round Robin: Not Supported 00:25:30.329 Vendor Specific: Not Supported 00:25:30.329 Reset Timeout: 7500 ms 00:25:30.329 Doorbell Stride: 4 bytes 00:25:30.329 NVM Subsystem Reset: Not Supported 00:25:30.329 Command Sets Supported 00:25:30.329 NVM Command Set: Supported 00:25:30.329 Boot Partition: Not Supported 00:25:30.329 Memory Page Size Minimum: 4096 bytes 00:25:30.329 Memory Page Size Maximum: 4096 bytes 00:25:30.329 Persistent Memory Region: Not Supported 00:25:30.329 Optional Asynchronous Events Supported 00:25:30.329 Namespace Attribute Notices: Not Supported 00:25:30.329 Firmware Activation Notices: Not Supported 00:25:30.329 ANA Change Notices: Not Supported 00:25:30.329 PLE Aggregate Log Change Notices: Not Supported 00:25:30.329 LBA Status Info Alert Notices: Not Supported 00:25:30.329 EGE Aggregate Log Change Notices: Not Supported 00:25:30.329 Normal NVM Subsystem Shutdown event: Not Supported 00:25:30.329 Zone Descriptor Change Notices: Not Supported 00:25:30.329 Discovery Log Change Notices: Supported 00:25:30.329 Controller Attributes 00:25:30.329 128-bit Host Identifier: Not Supported 00:25:30.329 Non-Operational Permissive Mode: Not Supported 00:25:30.329 NVM Sets: Not Supported 00:25:30.329 Read Recovery Levels: Not Supported 00:25:30.329 Endurance Groups: Not Supported 00:25:30.329 Predictable Latency Mode: Not Supported 00:25:30.329 Traffic Based Keep ALive: Not Supported 00:25:30.329 Namespace Granularity: Not Supported 00:25:30.329 SQ Associations: Not Supported 00:25:30.329 UUID List: Not Supported 00:25:30.329 Multi-Domain Subsystem: Not Supported 00:25:30.329 Fixed Capacity Management: Not Supported 00:25:30.329 Variable Capacity Management: Not Supported 00:25:30.329 Delete Endurance Group: Not Supported 00:25:30.329 Delete NVM Set: Not Supported 00:25:30.329 Extended LBA Formats Supported: Not Supported 00:25:30.329 Flexible 
Data Placement Supported: Not Supported 00:25:30.329 00:25:30.329 Controller Memory Buffer Support 00:25:30.329 ================================ 00:25:30.329 Supported: No 00:25:30.329 00:25:30.329 Persistent Memory Region Support 00:25:30.329 ================================ 00:25:30.329 Supported: No 00:25:30.329 00:25:30.329 Admin Command Set Attributes 00:25:30.329 ============================ 00:25:30.329 Security Send/Receive: Not Supported 00:25:30.329 Format NVM: Not Supported 00:25:30.329 Firmware Activate/Download: Not Supported 00:25:30.329 Namespace Management: Not Supported 00:25:30.329 Device Self-Test: Not Supported 00:25:30.329 Directives: Not Supported 00:25:30.329 NVMe-MI: Not Supported 00:25:30.329 Virtualization Management: Not Supported 00:25:30.329 Doorbell Buffer Config: Not Supported 00:25:30.329 Get LBA Status Capability: Not Supported 00:25:30.329 Command & Feature Lockdown Capability: Not Supported 00:25:30.329 Abort Command Limit: 1 00:25:30.329 Async Event Request Limit: 1 00:25:30.329 Number of Firmware Slots: N/A 00:25:30.329 Firmware Slot 1 Read-Only: N/A 00:25:30.329 Firmware Activation Without Reset: N/A 00:25:30.329 Multiple Update Detection Support: N/A 00:25:30.329 Firmware Update Granularity: No Information Provided 00:25:30.329 Per-Namespace SMART Log: No 00:25:30.329 Asymmetric Namespace Access Log Page: Not Supported 00:25:30.329 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:30.329 Command Effects Log Page: Not Supported 00:25:30.329 Get Log Page Extended Data: Supported 00:25:30.329 Telemetry Log Pages: Not Supported 00:25:30.329 Persistent Event Log Pages: Not Supported 00:25:30.329 Supported Log Pages Log Page: May Support 00:25:30.329 Commands Supported & Effects Log Page: Not Supported 00:25:30.329 Feature Identifiers & Effects Log Page:May Support 00:25:30.329 NVMe-MI Commands & Effects Log Page: May Support 00:25:30.329 Data Area 4 for Telemetry Log: Not Supported 00:25:30.329 Error Log Page Entries 
Supported: 1 00:25:30.329 Keep Alive: Not Supported 00:25:30.329 00:25:30.329 NVM Command Set Attributes 00:25:30.329 ========================== 00:25:30.329 Submission Queue Entry Size 00:25:30.329 Max: 1 00:25:30.329 Min: 1 00:25:30.329 Completion Queue Entry Size 00:25:30.329 Max: 1 00:25:30.329 Min: 1 00:25:30.329 Number of Namespaces: 0 00:25:30.329 Compare Command: Not Supported 00:25:30.329 Write Uncorrectable Command: Not Supported 00:25:30.329 Dataset Management Command: Not Supported 00:25:30.329 Write Zeroes Command: Not Supported 00:25:30.329 Set Features Save Field: Not Supported 00:25:30.329 Reservations: Not Supported 00:25:30.329 Timestamp: Not Supported 00:25:30.329 Copy: Not Supported 00:25:30.329 Volatile Write Cache: Not Present 00:25:30.329 Atomic Write Unit (Normal): 1 00:25:30.329 Atomic Write Unit (PFail): 1 00:25:30.329 Atomic Compare & Write Unit: 1 00:25:30.329 Fused Compare & Write: Not Supported 00:25:30.329 Scatter-Gather List 00:25:30.329 SGL Command Set: Supported 00:25:30.329 SGL Keyed: Not Supported 00:25:30.329 SGL Bit Bucket Descriptor: Not Supported 00:25:30.329 SGL Metadata Pointer: Not Supported 00:25:30.329 Oversized SGL: Not Supported 00:25:30.329 SGL Metadata Address: Not Supported 00:25:30.329 SGL Offset: Supported 00:25:30.329 Transport SGL Data Block: Not Supported 00:25:30.329 Replay Protected Memory Block: Not Supported 00:25:30.329 00:25:30.329 Firmware Slot Information 00:25:30.329 ========================= 00:25:30.329 Active slot: 0 00:25:30.329 00:25:30.329 00:25:30.329 Error Log 00:25:30.329 ========= 00:25:30.330 00:25:30.330 Active Namespaces 00:25:30.330 ================= 00:25:30.330 Discovery Log Page 00:25:30.330 ================== 00:25:30.330 Generation Counter: 2 00:25:30.330 Number of Records: 2 00:25:30.330 Record Format: 0 00:25:30.330 00:25:30.330 Discovery Log Entry 0 00:25:30.330 ---------------------- 00:25:30.330 Transport Type: 3 (TCP) 00:25:30.330 Address Family: 1 (IPv4) 00:25:30.330 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:30.330 Entry Flags: 00:25:30.330 Duplicate Returned Information: 0 00:25:30.330 Explicit Persistent Connection Support for Discovery: 0 00:25:30.330 Transport Requirements: 00:25:30.330 Secure Channel: Not Specified 00:25:30.330 Port ID: 1 (0x0001) 00:25:30.330 Controller ID: 65535 (0xffff) 00:25:30.330 Admin Max SQ Size: 32 00:25:30.330 Transport Service Identifier: 4420 00:25:30.330 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:30.330 Transport Address: 10.0.0.1 00:25:30.330 Discovery Log Entry 1 00:25:30.330 ---------------------- 00:25:30.330 Transport Type: 3 (TCP) 00:25:30.330 Address Family: 1 (IPv4) 00:25:30.330 Subsystem Type: 2 (NVM Subsystem) 00:25:30.330 Entry Flags: 00:25:30.330 Duplicate Returned Information: 0 00:25:30.330 Explicit Persistent Connection Support for Discovery: 0 00:25:30.330 Transport Requirements: 00:25:30.330 Secure Channel: Not Specified 00:25:30.330 Port ID: 1 (0x0001) 00:25:30.330 Controller ID: 65535 (0xffff) 00:25:30.330 Admin Max SQ Size: 32 00:25:30.330 Transport Service Identifier: 4420 00:25:30.330 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:30.330 Transport Address: 10.0.0.1 00:25:30.330 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:30.330 get_feature(0x01) failed 00:25:30.330 get_feature(0x02) failed 00:25:30.330 get_feature(0x04) failed 00:25:30.330 ===================================================== 00:25:30.330 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:30.330 ===================================================== 00:25:30.330 Controller Capabilities/Features 00:25:30.330 ================================ 00:25:30.330 Vendor ID: 0000 00:25:30.330 Subsystem Vendor ID: 
0000 00:25:30.330 Serial Number: 930e0addf1111cc07d5c 00:25:30.330 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:30.330 Firmware Version: 6.8.9-20 00:25:30.330 Recommended Arb Burst: 6 00:25:30.330 IEEE OUI Identifier: 00 00 00 00:25:30.330 Multi-path I/O 00:25:30.330 May have multiple subsystem ports: Yes 00:25:30.330 May have multiple controllers: Yes 00:25:30.330 Associated with SR-IOV VF: No 00:25:30.330 Max Data Transfer Size: Unlimited 00:25:30.330 Max Number of Namespaces: 1024 00:25:30.330 Max Number of I/O Queues: 128 00:25:30.330 NVMe Specification Version (VS): 1.3 00:25:30.330 NVMe Specification Version (Identify): 1.3 00:25:30.330 Maximum Queue Entries: 1024 00:25:30.330 Contiguous Queues Required: No 00:25:30.330 Arbitration Mechanisms Supported 00:25:30.330 Weighted Round Robin: Not Supported 00:25:30.330 Vendor Specific: Not Supported 00:25:30.330 Reset Timeout: 7500 ms 00:25:30.330 Doorbell Stride: 4 bytes 00:25:30.330 NVM Subsystem Reset: Not Supported 00:25:30.330 Command Sets Supported 00:25:30.330 NVM Command Set: Supported 00:25:30.330 Boot Partition: Not Supported 00:25:30.330 Memory Page Size Minimum: 4096 bytes 00:25:30.330 Memory Page Size Maximum: 4096 bytes 00:25:30.330 Persistent Memory Region: Not Supported 00:25:30.330 Optional Asynchronous Events Supported 00:25:30.330 Namespace Attribute Notices: Supported 00:25:30.330 Firmware Activation Notices: Not Supported 00:25:30.330 ANA Change Notices: Supported 00:25:30.330 PLE Aggregate Log Change Notices: Not Supported 00:25:30.330 LBA Status Info Alert Notices: Not Supported 00:25:30.330 EGE Aggregate Log Change Notices: Not Supported 00:25:30.330 Normal NVM Subsystem Shutdown event: Not Supported 00:25:30.330 Zone Descriptor Change Notices: Not Supported 00:25:30.330 Discovery Log Change Notices: Not Supported 00:25:30.330 Controller Attributes 00:25:30.330 128-bit Host Identifier: Supported 00:25:30.330 Non-Operational Permissive Mode: Not Supported 00:25:30.330 NVM Sets: Not 
Supported 00:25:30.330 Read Recovery Levels: Not Supported 00:25:30.330 Endurance Groups: Not Supported 00:25:30.330 Predictable Latency Mode: Not Supported 00:25:30.330 Traffic Based Keep ALive: Supported 00:25:30.330 Namespace Granularity: Not Supported 00:25:30.330 SQ Associations: Not Supported 00:25:30.330 UUID List: Not Supported 00:25:30.330 Multi-Domain Subsystem: Not Supported 00:25:30.330 Fixed Capacity Management: Not Supported 00:25:30.330 Variable Capacity Management: Not Supported 00:25:30.330 Delete Endurance Group: Not Supported 00:25:30.330 Delete NVM Set: Not Supported 00:25:30.330 Extended LBA Formats Supported: Not Supported 00:25:30.330 Flexible Data Placement Supported: Not Supported 00:25:30.330 00:25:30.330 Controller Memory Buffer Support 00:25:30.330 ================================ 00:25:30.330 Supported: No 00:25:30.330 00:25:30.330 Persistent Memory Region Support 00:25:30.330 ================================ 00:25:30.330 Supported: No 00:25:30.330 00:25:30.330 Admin Command Set Attributes 00:25:30.330 ============================ 00:25:30.330 Security Send/Receive: Not Supported 00:25:30.330 Format NVM: Not Supported 00:25:30.330 Firmware Activate/Download: Not Supported 00:25:30.330 Namespace Management: Not Supported 00:25:30.330 Device Self-Test: Not Supported 00:25:30.330 Directives: Not Supported 00:25:30.330 NVMe-MI: Not Supported 00:25:30.330 Virtualization Management: Not Supported 00:25:30.330 Doorbell Buffer Config: Not Supported 00:25:30.330 Get LBA Status Capability: Not Supported 00:25:30.330 Command & Feature Lockdown Capability: Not Supported 00:25:30.330 Abort Command Limit: 4 00:25:30.330 Async Event Request Limit: 4 00:25:30.330 Number of Firmware Slots: N/A 00:25:30.330 Firmware Slot 1 Read-Only: N/A 00:25:30.330 Firmware Activation Without Reset: N/A 00:25:30.330 Multiple Update Detection Support: N/A 00:25:30.330 Firmware Update Granularity: No Information Provided 00:25:30.330 Per-Namespace SMART Log: Yes 
00:25:30.330 Asymmetric Namespace Access Log Page: Supported 00:25:30.330 ANA Transition Time : 10 sec 00:25:30.330 00:25:30.330 Asymmetric Namespace Access Capabilities 00:25:30.330 ANA Optimized State : Supported 00:25:30.330 ANA Non-Optimized State : Supported 00:25:30.330 ANA Inaccessible State : Supported 00:25:30.330 ANA Persistent Loss State : Supported 00:25:30.330 ANA Change State : Supported 00:25:30.330 ANAGRPID is not changed : No 00:25:30.330 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:30.330 00:25:30.330 ANA Group Identifier Maximum : 128 00:25:30.330 Number of ANA Group Identifiers : 128 00:25:30.330 Max Number of Allowed Namespaces : 1024 00:25:30.330 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:30.330 Command Effects Log Page: Supported 00:25:30.330 Get Log Page Extended Data: Supported 00:25:30.330 Telemetry Log Pages: Not Supported 00:25:30.330 Persistent Event Log Pages: Not Supported 00:25:30.330 Supported Log Pages Log Page: May Support 00:25:30.330 Commands Supported & Effects Log Page: Not Supported 00:25:30.330 Feature Identifiers & Effects Log Page:May Support 00:25:30.330 NVMe-MI Commands & Effects Log Page: May Support 00:25:30.330 Data Area 4 for Telemetry Log: Not Supported 00:25:30.330 Error Log Page Entries Supported: 128 00:25:30.330 Keep Alive: Supported 00:25:30.330 Keep Alive Granularity: 1000 ms 00:25:30.330 00:25:30.330 NVM Command Set Attributes 00:25:30.330 ========================== 00:25:30.330 Submission Queue Entry Size 00:25:30.331 Max: 64 00:25:30.331 Min: 64 00:25:30.331 Completion Queue Entry Size 00:25:30.331 Max: 16 00:25:30.331 Min: 16 00:25:30.331 Number of Namespaces: 1024 00:25:30.331 Compare Command: Not Supported 00:25:30.331 Write Uncorrectable Command: Not Supported 00:25:30.331 Dataset Management Command: Supported 00:25:30.331 Write Zeroes Command: Supported 00:25:30.331 Set Features Save Field: Not Supported 00:25:30.331 Reservations: Not Supported 00:25:30.331 Timestamp: Not Supported 
00:25:30.331 Copy: Not Supported 00:25:30.331 Volatile Write Cache: Present 00:25:30.331 Atomic Write Unit (Normal): 1 00:25:30.331 Atomic Write Unit (PFail): 1 00:25:30.331 Atomic Compare & Write Unit: 1 00:25:30.331 Fused Compare & Write: Not Supported 00:25:30.331 Scatter-Gather List 00:25:30.331 SGL Command Set: Supported 00:25:30.331 SGL Keyed: Not Supported 00:25:30.331 SGL Bit Bucket Descriptor: Not Supported 00:25:30.331 SGL Metadata Pointer: Not Supported 00:25:30.331 Oversized SGL: Not Supported 00:25:30.331 SGL Metadata Address: Not Supported 00:25:30.331 SGL Offset: Supported 00:25:30.331 Transport SGL Data Block: Not Supported 00:25:30.331 Replay Protected Memory Block: Not Supported 00:25:30.331 00:25:30.331 Firmware Slot Information 00:25:30.331 ========================= 00:25:30.331 Active slot: 0 00:25:30.331 00:25:30.331 Asymmetric Namespace Access 00:25:30.331 =========================== 00:25:30.331 Change Count : 0 00:25:30.331 Number of ANA Group Descriptors : 1 00:25:30.331 ANA Group Descriptor : 0 00:25:30.331 ANA Group ID : 1 00:25:30.331 Number of NSID Values : 1 00:25:30.331 Change Count : 0 00:25:30.331 ANA State : 1 00:25:30.331 Namespace Identifier : 1 00:25:30.331 00:25:30.331 Commands Supported and Effects 00:25:30.331 ============================== 00:25:30.331 Admin Commands 00:25:30.331 -------------- 00:25:30.331 Get Log Page (02h): Supported 00:25:30.331 Identify (06h): Supported 00:25:30.331 Abort (08h): Supported 00:25:30.331 Set Features (09h): Supported 00:25:30.331 Get Features (0Ah): Supported 00:25:30.331 Asynchronous Event Request (0Ch): Supported 00:25:30.331 Keep Alive (18h): Supported 00:25:30.331 I/O Commands 00:25:30.331 ------------ 00:25:30.331 Flush (00h): Supported 00:25:30.331 Write (01h): Supported LBA-Change 00:25:30.331 Read (02h): Supported 00:25:30.331 Write Zeroes (08h): Supported LBA-Change 00:25:30.331 Dataset Management (09h): Supported 00:25:30.331 00:25:30.331 Error Log 00:25:30.331 ========= 
00:25:30.331 Entry: 0 00:25:30.331 Error Count: 0x3 00:25:30.331 Submission Queue Id: 0x0 00:25:30.331 Command Id: 0x5 00:25:30.331 Phase Bit: 0 00:25:30.331 Status Code: 0x2 00:25:30.331 Status Code Type: 0x0 00:25:30.331 Do Not Retry: 1 00:25:30.331 Error Location: 0x28 00:25:30.331 LBA: 0x0 00:25:30.331 Namespace: 0x0 00:25:30.331 Vendor Log Page: 0x0 00:25:30.331 ----------- 00:25:30.331 Entry: 1 00:25:30.331 Error Count: 0x2 00:25:30.331 Submission Queue Id: 0x0 00:25:30.331 Command Id: 0x5 00:25:30.331 Phase Bit: 0 00:25:30.331 Status Code: 0x2 00:25:30.331 Status Code Type: 0x0 00:25:30.331 Do Not Retry: 1 00:25:30.331 Error Location: 0x28 00:25:30.331 LBA: 0x0 00:25:30.331 Namespace: 0x0 00:25:30.331 Vendor Log Page: 0x0 00:25:30.331 ----------- 00:25:30.331 Entry: 2 00:25:30.331 Error Count: 0x1 00:25:30.331 Submission Queue Id: 0x0 00:25:30.331 Command Id: 0x4 00:25:30.331 Phase Bit: 0 00:25:30.331 Status Code: 0x2 00:25:30.331 Status Code Type: 0x0 00:25:30.331 Do Not Retry: 1 00:25:30.331 Error Location: 0x28 00:25:30.331 LBA: 0x0 00:25:30.331 Namespace: 0x0 00:25:30.331 Vendor Log Page: 0x0 00:25:30.331 00:25:30.331 Number of Queues 00:25:30.331 ================ 00:25:30.331 Number of I/O Submission Queues: 128 00:25:30.331 Number of I/O Completion Queues: 128 00:25:30.331 00:25:30.331 ZNS Specific Controller Data 00:25:30.331 ============================ 00:25:30.331 Zone Append Size Limit: 0 00:25:30.331 00:25:30.331 00:25:30.331 Active Namespaces 00:25:30.331 ================= 00:25:30.331 get_feature(0x05) failed 00:25:30.331 Namespace ID:1 00:25:30.331 Command Set Identifier: NVM (00h) 00:25:30.331 Deallocate: Supported 00:25:30.331 Deallocated/Unwritten Error: Not Supported 00:25:30.331 Deallocated Read Value: Unknown 00:25:30.331 Deallocate in Write Zeroes: Not Supported 00:25:30.331 Deallocated Guard Field: 0xFFFF 00:25:30.331 Flush: Supported 00:25:30.331 Reservation: Not Supported 00:25:30.331 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:30.331 Size (in LBAs): 1953525168 (931GiB) 00:25:30.331 Capacity (in LBAs): 1953525168 (931GiB) 00:25:30.331 Utilization (in LBAs): 1953525168 (931GiB) 00:25:30.331 UUID: 7407940d-1c8e-4e24-8f26-03bca251f04c 00:25:30.331 Thin Provisioning: Not Supported 00:25:30.331 Per-NS Atomic Units: Yes 00:25:30.331 Atomic Boundary Size (Normal): 0 00:25:30.331 Atomic Boundary Size (PFail): 0 00:25:30.331 Atomic Boundary Offset: 0 00:25:30.331 NGUID/EUI64 Never Reused: No 00:25:30.331 ANA group ID: 1 00:25:30.331 Namespace Write Protected: No 00:25:30.331 Number of LBA Formats: 1 00:25:30.331 Current LBA Format: LBA Format #00 00:25:30.331 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:30.331 00:25:30.331 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:30.331 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.353 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:30.353 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.353 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:30.353 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.353 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.353 rmmod nvme_tcp 00:25:30.613 rmmod nvme_fabrics 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.613 11:27:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:32.519 11:27:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:32.519 11:27:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:35.809 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:35.809 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:36.375 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:25:36.633 00:25:36.633 real 0m16.827s 00:25:36.633 user 0m4.417s 00:25:36.633 sys 0m8.737s 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.633 ************************************ 00:25:36.633 END TEST nvmf_identify_kernel_target 00:25:36.633 ************************************ 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.633 ************************************ 00:25:36.633 START TEST nvmf_auth_host 00:25:36.633 ************************************ 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:36.633 * Looking for test storage... 
00:25:36.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:36.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:36.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.893 --rc genhtml_branch_coverage=1 00:25:36.893 --rc genhtml_function_coverage=1 00:25:36.893 --rc genhtml_legend=1 00:25:36.893 --rc geninfo_all_blocks=1 00:25:36.893 --rc geninfo_unexecuted_blocks=1 00:25:36.893 00:25:36.893 ' 00:25:36.893 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:36.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.893 --rc genhtml_branch_coverage=1 00:25:36.893 --rc genhtml_function_coverage=1 00:25:36.893 --rc genhtml_legend=1 00:25:36.893 --rc geninfo_all_blocks=1 00:25:36.893 --rc geninfo_unexecuted_blocks=1 00:25:36.893 00:25:36.893 ' 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:36.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.893 --rc genhtml_branch_coverage=1 00:25:36.893 --rc genhtml_function_coverage=1 00:25:36.893 --rc genhtml_legend=1 00:25:36.893 --rc geninfo_all_blocks=1 00:25:36.893 --rc geninfo_unexecuted_blocks=1 00:25:36.893 00:25:36.893 ' 00:25:36.893 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.894 --rc genhtml_branch_coverage=1 00:25:36.894 --rc genhtml_function_coverage=1 00:25:36.894 --rc genhtml_legend=1 00:25:36.894 --rc geninfo_all_blocks=1 00:25:36.894 --rc geninfo_unexecuted_blocks=1 00:25:36.894 00:25:36.894 ' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.894 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.894 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.894 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:43.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:43.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:43.464 Found net devices under 0000:af:00.0: cvl_0_0 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.464 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:43.465 Found net devices under 0000:af:00.1: cvl_0_1 00:25:43.465 11:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.465 11:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:25:43.465 00:25:43.465 --- 10.0.0.2 ping statistics --- 00:25:43.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.465 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:25:43.465 00:25:43.465 --- 10.0.0.1 ping statistics --- 00:25:43.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.465 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1858275 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1858275 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1858275 ']' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.465 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.724 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.724 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.725 11:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0faf351bcbbd3993967bb9eb0a331d47 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rCW 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0faf351bcbbd3993967bb9eb0a331d47 0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0faf351bcbbd3993967bb9eb0a331d47 0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0faf351bcbbd3993967bb9eb0a331d47 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rCW 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rCW 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rCW 
00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de631dd774eb09106e1f0f105df26c9a857372faae78356d4b494e8dc4e3e9dc 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xly 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de631dd774eb09106e1f0f105df26c9a857372faae78356d4b494e8dc4e3e9dc 3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de631dd774eb09106e1f0f105df26c9a857372faae78356d4b494e8dc4e3e9dc 3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=de631dd774eb09106e1f0f105df26c9a857372faae78356d4b494e8dc4e3e9dc 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xly 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xly 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xly 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=42fcb5c1d902fbc927d89ae678854f2dcb1f58a4a88121bc 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7u3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 42fcb5c1d902fbc927d89ae678854f2dcb1f58a4a88121bc 0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 42fcb5c1d902fbc927d89ae678854f2dcb1f58a4a88121bc 0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=42fcb5c1d902fbc927d89ae678854f2dcb1f58a4a88121bc 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7u3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7u3 00:25:43.725 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7u3 00:25:43.726 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e1631a4fee947a8cf8473fe5926984dc49a9bb571bb17d1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6sY 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e1631a4fee947a8cf8473fe5926984dc49a9bb571bb17d1 2 00:25:43.985 11:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e1631a4fee947a8cf8473fe5926984dc49a9bb571bb17d1 2 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e1631a4fee947a8cf8473fe5926984dc49a9bb571bb17d1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6sY 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6sY 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6sY 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0dd8f02acd4469470e204c6a5e33c3eb 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.igV 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0dd8f02acd4469470e204c6a5e33c3eb 1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0dd8f02acd4469470e204c6a5e33c3eb 1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0dd8f02acd4469470e204c6a5e33c3eb 00:25:43.985 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.igV 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.igV 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.igV 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f5c44a5c7e86a8fedadb81178c71cb78 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Xk 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f5c44a5c7e86a8fedadb81178c71cb78 1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f5c44a5c7e86a8fedadb81178c71cb78 1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f5c44a5c7e86a8fedadb81178c71cb78 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Xk 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Xk 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1Xk 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.986 11:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d34e89ca34b01dd265dacd280b82eb39e58c8c782a4d48ea 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.py7 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d34e89ca34b01dd265dacd280b82eb39e58c8c782a4d48ea 2 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d34e89ca34b01dd265dacd280b82eb39e58c8c782a4d48ea 2 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d34e89ca34b01dd265dacd280b82eb39e58c8c782a4d48ea 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.py7 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.py7 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.py7 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc17b83113a9a1b9b0b2aa9d35b2ac68 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XHc 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc17b83113a9a1b9b0b2aa9d35b2ac68 0 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc17b83113a9a1b9b0b2aa9d35b2ac68 0 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc17b83113a9a1b9b0b2aa9d35b2ac68 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:43.986 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XHc 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XHc 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.XHc 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da4ef551d9c0648b34980665ea138056023bc69eac1e446f6627bcc27d93456f 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Qp2 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da4ef551d9c0648b34980665ea138056023bc69eac1e446f6627bcc27d93456f 3 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da4ef551d9c0648b34980665ea138056023bc69eac1e446f6627bcc27d93456f 3 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da4ef551d9c0648b34980665ea138056023bc69eac1e446f6627bcc27d93456f 00:25:44.245 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:44.245 11:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Qp2 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Qp2 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qp2 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1858275 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1858275 ']' 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
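Each hex key above is then wrapped into its on-wire form by `format_dhchap_key`/`format_key` via the `python -` heredoc in the trace. A minimal stand-alone reproduction, assuming SPDK's convention of appending a little-endian CRC-32 of the ASCII key before base64-encoding under the `DHHC-1:<digest>:` prefix (consistent with the formatted keys printed later in this log):

```shell
# Reproduces format_key DHHC-1 <hexkey> <digest> for the 48-char key
# generated above (keys[1]); the CRC placement/endianness is an assumption
# inferred from this log's output.
formatted=$(python3 - <<'EOF'
import base64, struct, zlib
key = b"42fcb5c1d902fbc927d89ae678854f2dcb1f58a4a88121bc"  # hex text from the trace
digest = 0                                                 # 0 = null, 1-3 = sha256/384/512
crc = struct.pack("<I", zlib.crc32(key))                   # little-endian CRC-32 (assumed)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
)
echo "$formatted"
```

Note the hex string is treated as ASCII text, not decoded back to raw bytes, which is why the base64 payload for a 48-character key is 64 characters plus the 4-byte CRC tail.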
00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.245 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rCW 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xly ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xly 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7u3 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
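Each pass of the `for i in "${!keys[@]}"` loop here issues a `keyring_file_add_key` RPC through the `rpc_cmd` wrapper. Outside the test harness the equivalent calls would look roughly like the fragment below (paths taken from this run; assumes SPDK's in-tree `scripts/rpc.py` and a target already listening on the default `/var/tmp/spdk.sock`):

```shell
# Register a key and its controller key (when one was generated) with the
# SPDK keyring -- hypothetical direct invocation of what rpc_cmd wraps.
./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.rCW
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xly
```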
00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6sY ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6sY 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.igV 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1Xk ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Xk 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.py7 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XHc ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XHc 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qp2 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:44.504 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.505 11:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:44.505 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:47.038 Waiting for block devices as requested 00:25:47.297 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:25:47.297 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:47.297 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:47.555 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:47.555 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:47.555 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:47.814 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:47.814 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:47.814 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:47.814 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:48.073 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:48.073 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:48.073 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:48.073 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:48.334 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:48.334 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:48.334 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:48.898 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:49.157 No valid GPT data, bailing 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:49.157 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:49.157 00:25:49.157 Discovery Log Number of Records 2, Generation counter 2 00:25:49.157 =====Discovery Log Entry 0====== 00:25:49.157 trtype: tcp 00:25:49.158 adrfam: ipv4 00:25:49.158 subtype: current discovery subsystem 00:25:49.158 treq: not specified, sq flow control disable supported 00:25:49.158 portid: 1 00:25:49.158 trsvcid: 4420 00:25:49.158 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:49.158 traddr: 10.0.0.1 00:25:49.158 eflags: none 00:25:49.158 sectype: none 00:25:49.158 =====Discovery Log Entry 1====== 00:25:49.158 trtype: tcp 00:25:49.158 adrfam: ipv4 00:25:49.158 subtype: nvme subsystem 00:25:49.158 treq: not specified, sq flow control disable supported 00:25:49.158 portid: 1 00:25:49.158 trsvcid: 4420 00:25:49.158 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:49.158 traddr: 10.0.0.1 00:25:49.158 eflags: none 00:25:49.158 sectype: none 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:49.158 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.158 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.416 nvme0n1 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.416 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.417 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.675 nvme0n1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.675 11:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.675 
11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.675 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.934 nvme0n1 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:25:49.934 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:49.935 nvme0n1 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.935 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.194 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.195 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.195 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.195 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.195 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.195 nvme0n1 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.195 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.195 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.454 nvme0n1 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.454 
11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:50.454 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:50.454 
11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.455 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.455 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 nvme0n1 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.748 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.748 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.748 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.749 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.050 nvme0n1 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.050 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.050 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.348 nvme0n1 00:25:51.348 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.348 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:51.349 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.349 nvme0n1 00:25:51.349 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.608 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.608 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.609 nvme0n1 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.609 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.867 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.867 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.868 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 nvme0n1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.128 
11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.128 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.394 nvme0n1 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.394 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.394 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 nvme0n1 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:52.654 
11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.654 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.655 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.655 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.911 nvme0n1 00:25:52.911 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.911 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.911 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.911 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.911 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.911 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.167 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.168 
11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.168 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.425 nvme0n1 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.425 11:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.426 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.683 nvme0n1 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.940 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.941 11:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:00:53.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.199 nvme0n1
00:25:54.199 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.199 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.199 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.199 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.767 nvme0n1
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.767 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.768 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:54.768 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.768 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.026 nvme0n1
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.026 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.592 nvme0n1
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]]
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.593 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.160 nvme0n1
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.160 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]]
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.161 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.730 nvme0n1
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.730 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.731 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.731 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:56.731 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.731 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.300 nvme0n1
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.300 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.868 nvme0n1
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.868
11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.868 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.127 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 nvme0n1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 nvme0n1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.696 
11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.696 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.697 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:58.697 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.697 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.956 nvme0n1 
00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:58.956 11:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:58.956 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.957 
11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.957 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 nvme0n1 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.216 11:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.216 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.475 nvme0n1 00:25:59.475 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.475 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.475 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.476 11:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.476 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.735 nvme0n1 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.735 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.736 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.995 nvme0n1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:59.996 
11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.996 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.255 nvme0n1 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.255 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 
00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.255 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.256 11:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.256 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.515 nvme0n1 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.515 11:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.515 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.516 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.516 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.516 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.516 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.516 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.775 nvme0n1 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.775 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.035 nvme0n1 00:26:01.035 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.035 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.036 11:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]]
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.036 11:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.296 nvme0n1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.296 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.555 nvme0n1
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:01.555 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.556 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.872 nvme0n1
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:01.873 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]]
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.131 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.131 nvme0n1
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.131 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.389 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.646 nvme0n1
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:02.646 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.647 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.904 nvme0n1
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:26:02.904 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]]
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.905 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.162 11:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:03.420 nvme0n1
00:26:03.420 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.420 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:03.420 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host --
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.421 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 nvme0n1 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.987 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:03.988 11:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.988 11:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.988 11:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.246 nvme0n1 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.246 11:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.246 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:04.247 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.826 nvme0n1 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.826 11:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:04.826 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.827 11:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 nvme0n1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.395 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.963 nvme0n1 00:26:05.963 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.963 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.963 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.963 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.964 11:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.532 nvme0n1 00:26:06.532 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.533 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.101 nvme0n1 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.101 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.101 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.101 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.101 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.101 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.359 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.928 nvme0n1 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:07.928 11:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.928 nvme0n1 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.928 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.188 11:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.188 nvme0n1 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.188 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.448 nvme0n1 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.448 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.707 nvme0n1
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:08.707 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.708 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.967 nvme0n1
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.967 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.226 nvme0n1
00:26:09.226 11:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.226 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.227 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.486 nvme0n1
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.486 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.745 nvme0n1
00:26:09.745 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.745 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.746 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.005 nvme0n1
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:10.005 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.006 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.264 nvme0n1
00:26:10.264 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.264 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv:
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=:
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.265 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.524 nvme0n1
00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.524 11:27:43
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.524 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.783 nvme0n1 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.783 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.042 nvme0n1 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.042 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.043 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.302 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.302 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.302 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.302 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.302 11:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.302 11:27:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.302 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.561 nvme0n1 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.561 11:27:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.561 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.820 nvme0n1 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.820 
11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.820 11:27:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.820 11:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.387 nvme0n1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.387 11:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.387 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 nvme0n1 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:12.645 
11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.645 11:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.645 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 nvme0n1 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 11:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==: 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 11:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.467 nvme0n1 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.467 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.725 11:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=: 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.725 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.726 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.726 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.726 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.985 nvme0n1 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.985 
11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGZhZjM1MWJjYmJkMzk5Mzk2N2JiOWViMGEzMzFkNDeWK7sv: 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2MzFkZDc3NGViMDkxMDZlMWYwZjEwNWRmMjZjOWE4NTczNzJmYWFlNzgzNTZkNGI0OTRlOGRjNGUzZTlkY5swZRs=: 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.985 11:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.985 11:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.553 nvme0n1 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.553 11:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.553 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:14.812 11:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.812 11:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.812 11:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.380 nvme0n1
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.380 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.948 nvme0n1
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0ZTg5Y2EzNGIwMWRkMjY1ZGFjZDI4MGI4MmViMzllNThjOGM3ODJhNGQ0OGVh1+clxQ==:
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN: ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxN2I4MzExM2E5YTFiOWIwYjJhYTlkMzViMmFjNjjDKelN:
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.948 11:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.516 nvme0n1
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE0ZWY1NTFkOWMwNjQ4YjM0OTgwNjY1ZWExMzgwNTYwMjNiYzY5ZWFjMWU0NDZmNjYyN2JjYzI3ZDkzNDU2Zk+rywQ=:
00:26:16.516 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.517 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.086 nvme0n1
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==:
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]]
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==:
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.086 11:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.086 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.346 request:
00:26:17.346 {
00:26:17.346 "name": "nvme0",
00:26:17.346 "trtype": "tcp",
00:26:17.346 "traddr": "10.0.0.1",
00:26:17.346 "adrfam": "ipv4",
00:26:17.346 "trsvcid": "4420",
00:26:17.346 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:17.346 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:17.346 "prchk_reftag": false,
00:26:17.346 "prchk_guard": false,
00:26:17.346 "hdgst": false,
00:26:17.346 "ddgst": false,
00:26:17.346 "allow_unrecognized_csi": false,
00:26:17.346 "method": "bdev_nvme_attach_controller",
00:26:17.346 "req_id": 1
00:26:17.346 }
00:26:17.346 Got JSON-RPC error response
00:26:17.346 response:
00:26:17.346 {
00:26:17.346 "code": -5,
00:26:17.346 "message": "Input/output error"
00:26:17.346 }
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.346 request:
00:26:17.346 {
00:26:17.346 "name": "nvme0",
00:26:17.346 "trtype": "tcp",
00:26:17.346 "traddr": "10.0.0.1",
00:26:17.346 "adrfam": "ipv4",
00:26:17.346 "trsvcid": "4420",
00:26:17.346 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:17.346 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:17.346 "prchk_reftag": false,
00:26:17.346 "prchk_guard": false,
00:26:17.346 "hdgst": false,
00:26:17.346 "ddgst": false,
00:26:17.346 "dhchap_key": "key2",
00:26:17.346 "allow_unrecognized_csi": false,
00:26:17.346 "method": "bdev_nvme_attach_controller",
00:26:17.346 "req_id": 1
00:26:17.346 }
00:26:17.346 Got JSON-RPC error response
00:26:17.346 response:
00:26:17.346 {
00:26:17.346 "code": -5,
00:26:17.346 "message": "Input/output error"
00:26:17.346 }
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.346 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.347 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.606 request:
00:26:17.606 {
00:26:17.606 "name": "nvme0",
00:26:17.606 "trtype": "tcp",
00:26:17.606 "traddr": "10.0.0.1",
00:26:17.606 "adrfam": "ipv4",
00:26:17.606 "trsvcid": "4420",
00:26:17.606 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:17.606 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:17.606 "prchk_reftag": false,
00:26:17.606 "prchk_guard": false,
00:26:17.606 "hdgst": false,
00:26:17.606 "ddgst": false,
00:26:17.606 "dhchap_key": "key1",
00:26:17.606 "dhchap_ctrlr_key": "ckey2",
00:26:17.606 "allow_unrecognized_csi": false,
00:26:17.606 "method": "bdev_nvme_attach_controller",
00:26:17.606 "req_id": 1
00:26:17.606 }
00:26:17.606 Got JSON-RPC error response
00:26:17.606 response:
00:26:17.606 {
00:26:17.606 "code": -5,
00:26:17.606 "message": "Input/output error"
00:26:17.606 }
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.606 nvme0n1
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg:
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]]
00:26:17.606 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa:
00:26:17.607 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:17.607 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.607 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.865 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.866 request:
00:26:17.866 {
00:26:17.866 "name": "nvme0",
00:26:17.866 "dhchap_key": "key1",
00:26:17.866 "dhchap_ctrlr_key": "ckey2",
00:26:17.866 "method": "bdev_nvme_set_keys",
00:26:17.866 "req_id": 1
00:26:17.866 }
00:26:17.866 Got JSON-RPC error response
00:26:17.866 response:
00:26:17.866 { 00:26:17.866 "code": -13, 00:26:17.866 "message": "Permission denied" 00:26:17.866 } 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:17.866 11:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:18.802 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.802 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:18.802 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.802 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.802 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.061 11:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:19.061 11:27:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJmY2I1YzFkOTAyZmJjOTI3ZDg5YWU2Nzg4NTRmMmRjYjFmNThhNGE4ODEyMWJj37CUvw==: 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: ]] 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUxNjMxYTRmZWU5NDdhOGNmODQ3M2ZlNTkyNjk4NGRjNDlhOWJiNTcxYmIxN2Qxe0bQUg==: 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.998 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.257 nvme0n1 00:26:20.257 11:27:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRkOGYwMmFjZDQ0Njk0NzBlMjA0YzZhNWUzM2MzZWL8qiLg: 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: ]] 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjVjNDRhNWM3ZTg2YThmZWRhZGI4MTE3OGM3MWNiNzhzSdYa: 00:26:20.257 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:20.258 11:27:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.258 11:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.258 request: 00:26:20.258 { 00:26:20.258 "name": "nvme0", 00:26:20.258 "dhchap_key": "key2", 00:26:20.258 "dhchap_ctrlr_key": "ckey1", 00:26:20.258 "method": "bdev_nvme_set_keys", 00:26:20.258 "req_id": 1 00:26:20.258 } 00:26:20.258 Got JSON-RPC error response 00:26:20.258 response: 00:26:20.258 { 00:26:20.258 "code": -13, 00:26:20.258 "message": "Permission denied" 00:26:20.258 } 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:20.258 11:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:20.258 11:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.247 rmmod nvme_tcp 
00:26:21.247 rmmod nvme_fabrics 00:26:21.247 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1858275 ']' 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1858275 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1858275 ']' 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1858275 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.248 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858275 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1858275' 00:26:21.507 killing process with pid 1858275 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1858275 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1858275 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.507 11:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.064 11:27:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:24.064 11:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:26.594 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:26.594 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:27.530 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:26:27.530 11:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rCW /tmp/spdk.key-null.7u3 /tmp/spdk.key-sha256.igV /tmp/spdk.key-sha384.py7 
/tmp/spdk.key-sha512.Qp2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:27.530 11:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:30.823 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:30.823 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:30.823 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:30.823 00:26:30.823 real 0m53.895s 00:26:30.823 user 0m48.705s 00:26:30.823 sys 0m12.669s 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.823 ************************************ 00:26:30.823 END TEST nvmf_auth_host 00:26:30.823 ************************************ 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.823 ************************************ 00:26:30.823 START TEST nvmf_digest 00:26:30.823 ************************************ 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:30.823 * Looking for test storage... 00:26:30.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.823 11:28:03 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:30.823 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:30.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.824 --rc genhtml_branch_coverage=1 00:26:30.824 --rc genhtml_function_coverage=1 00:26:30.824 --rc genhtml_legend=1 00:26:30.824 --rc geninfo_all_blocks=1 00:26:30.824 --rc geninfo_unexecuted_blocks=1 00:26:30.824 00:26:30.824 ' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:30.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.824 --rc genhtml_branch_coverage=1 00:26:30.824 --rc genhtml_function_coverage=1 00:26:30.824 --rc genhtml_legend=1 00:26:30.824 --rc geninfo_all_blocks=1 00:26:30.824 --rc geninfo_unexecuted_blocks=1 00:26:30.824 00:26:30.824 ' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:30.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.824 --rc genhtml_branch_coverage=1 00:26:30.824 --rc genhtml_function_coverage=1 00:26:30.824 --rc genhtml_legend=1 00:26:30.824 --rc geninfo_all_blocks=1 00:26:30.824 --rc geninfo_unexecuted_blocks=1 00:26:30.824 00:26:30.824 ' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:30.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.824 --rc genhtml_branch_coverage=1 00:26:30.824 --rc genhtml_function_coverage=1 00:26:30.824 --rc genhtml_legend=1 00:26:30.824 --rc geninfo_all_blocks=1 00:26:30.824 --rc geninfo_unexecuted_blocks=1 00:26:30.824 00:26:30.824 ' 00:26:30.824 11:28:03 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.824 
11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.824 11:28:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.824 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.394 11:28:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:37.394 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:37.394 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:37.394 Found net devices under 0000:af:00.0: cvl_0_0 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:37.394 Found net devices under 0000:af:00.1: cvl_0_1 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:37.394 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:26:37.395 00:26:37.395 --- 10.0.0.2 ping statistics --- 00:26:37.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.395 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:26:37.395 00:26:37.395 --- 10.0.0.1 ping statistics --- 00:26:37.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.395 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.395 ************************************ 00:26:37.395 START TEST nvmf_digest_clean 00:26:37.395 ************************************ 00:26:37.395 
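The network bring-up recorded above (nvmftestinit: create namespace cvl_0_0_ns_spdk, move one e810 port into it, address the two ends as 10.0.0.2/10.0.0.1, open TCP port 4420, ping both ways) needs the dual-port NIC this rig has. For readers without that hardware, the same wiring can be sketched with a veth pair. Everything below is illustrative — the names demo_ns_spdk/veth_* are invented, not from the SPDK scripts — and the functions are only defined, not run, since the wiring needs root:

```shell
# Sketch of the nvmftestinit network bring-up above, using a veth pair in
# place of the physical cvl_0_0/cvl_0_1 ports. Namespace and device names
# are illustrative, not taken from the SPDK scripts.
demo_nvmf_net_setup() {
    local ns=demo_ns_spdk
    ip netns add "$ns"
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns "$ns"                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev veth_init                       # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt    # target side
    ip link set veth_init up
    ip netns exec "$ns" ip link set veth_tgt up
    ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP listener port, as the ipts wrapper does in the log:
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    # Both-way reachability check, mirroring the two pings above:
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}

demo_nvmf_net_teardown() {
    # Deleting the namespace destroys veth_tgt; its peer goes with it.
    ip netns del demo_ns_spdk
    ip link del veth_init 2>/dev/null || true
}

# demo_nvmf_net_setup      # requires root
# demo_nvmf_net_teardown
```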
11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1872670 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1872670 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1872670 ']' 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.395 11:28:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.395 [2024-12-06 11:28:09.659640] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:37.395 [2024-12-06 11:28:09.659680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.395 [2024-12-06 11:28:09.735599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.395 [2024-12-06 11:28:09.773408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.395 [2024-12-06 11:28:09.773444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.395 [2024-12-06 11:28:09.773451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.395 [2024-12-06 11:28:09.773457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.395 [2024-12-06 11:28:09.773461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:37.395 [2024-12-06 11:28:09.774004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.395 null0 00:26:37.395 [2024-12-06 11:28:09.920384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.395 [2024-12-06 11:28:09.944571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:37.395 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1872870 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1872870 /var/tmp/bperf.sock 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1872870 ']' 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
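The launch pattern the log records here — start bdevperf idle on a private RPC socket with --wait-for-rpc, wait for the socket, finish subsystem init, attach the target with digest enabled, then drive the timed run — collapses into the sketch below. SPDK_ROOT stands in for the Jenkins workspace checkout, the waitforlisten retry loop is elided, and the function is defined but never invoked, since it needs a live NVMe/TCP target:

```shell
# Sketch of the bperf launch-and-drive sequence from host/digest.sh.
# SPDK_ROOT is illustrative (here it would be the Jenkins workspace
# checkout); a waitforlisten-style retry between steps 1 and 2 is elided.
run_bperf_digest() {
    local spdk_root=${SPDK_ROOT:-/path/to/spdk}
    local sock=/var/tmp/bperf.sock

    # 1. Start bdevperf idle (-z) with subsystem init deferred (--wait-for-rpc),
    #    using the same workload shape as the first run above.
    "$spdk_root/build/examples/bdevperf" -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    local bperfpid=$!

    # 2. Finish init, then attach the target with data digest on (--ddgst).
    "$spdk_root/scripts/rpc.py" -s "$sock" framework_start_init
    "$spdk_root/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Kick off the timed 2-second run, then clean up.
    "$spdk_root/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
    kill "$bperfpid"
}
```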
00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.396 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.396 [2024-12-06 11:28:09.998901] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:37.396 [2024-12-06 11:28:09.998941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872870 ] 00:26:37.396 [2024-12-06 11:28:10.075319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.396 [2024-12-06 11:28:10.117659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.396 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.396 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:37.396 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:37.396 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:37.396 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:37.655 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.655 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.914 nvme0n1 00:26:37.914 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:37.914 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.914 Running I/O for 2 seconds... 00:26:40.228 28052.00 IOPS, 109.58 MiB/s [2024-12-06T10:28:13.166Z] 27609.00 IOPS, 107.85 MiB/s 00:26:40.228 Latency(us) 00:26:40.228 [2024-12-06T10:28:13.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.228 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:40.228 nvme0n1 : 2.01 27623.78 107.91 0.00 0.00 4628.03 2234.18 12988.04 00:26:40.228 [2024-12-06T10:28:13.166Z] =================================================================================================================== 00:26:40.228 [2024-12-06T10:28:13.166Z] Total : 27623.78 107.91 0.00 0.00 4628.03 2234.18 12988.04 00:26:40.228 { 00:26:40.228 "results": [ 00:26:40.228 { 00:26:40.228 "job": "nvme0n1", 00:26:40.228 "core_mask": "0x2", 00:26:40.228 "workload": "randread", 00:26:40.228 "status": "finished", 00:26:40.228 "queue_depth": 128, 00:26:40.228 "io_size": 4096, 00:26:40.228 "runtime": 2.005084, 00:26:40.228 "iops": 27623.78035034941, 00:26:40.228 "mibps": 107.90539199355239, 00:26:40.228 "io_failed": 0, 00:26:40.228 "io_timeout": 0, 00:26:40.228 "avg_latency_us": 4628.034949217749, 00:26:40.228 "min_latency_us": 2234.181818181818, 00:26:40.228 "max_latency_us": 12988.043636363636 00:26:40.228 } 00:26:40.228 ], 00:26:40.228 "core_count": 1 00:26:40.228 } 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:40.228 | select(.opcode=="crc32c") 00:26:40.228 | "\(.module_name) \(.executed)"' 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1872870 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1872870 ']' 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1872870 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.228 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872870 00:26:40.228 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:40.228 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:40.228 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872870' 00:26:40.228 killing process with pid 1872870 00:26:40.228 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1872870 00:26:40.228 Received shutdown signal, test time was about 2.000000 seconds 00:26:40.228 00:26:40.228 Latency(us) 00:26:40.228 [2024-12-06T10:28:13.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.228 [2024-12-06T10:28:13.166Z] =================================================================================================================== 00:26:40.228 [2024-12-06T10:28:13.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.228 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1872870 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1873444 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1873444 /var/tmp/bperf.sock 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1873444 ']' 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.488 [2024-12-06 11:28:13.227923] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:40.488 [2024-12-06 11:28:13.227969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873444 ] 00:26:40.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.488 Zero copy mechanism will not be used. 
00:26:40.488 [2024-12-06 11:28:13.302358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.488 [2024-12-06 11:28:13.339187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.488 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.747 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.747 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.006 nvme0n1 00:26:41.006 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:41.006 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.265 Zero copy mechanism will not be used. 00:26:41.265 Running I/O for 2 seconds... 
00:26:43.136 6112.00 IOPS, 764.00 MiB/s [2024-12-06T10:28:16.074Z] 5863.00 IOPS, 732.88 MiB/s 00:26:43.136 Latency(us) 00:26:43.136 [2024-12-06T10:28:16.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.136 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:43.136 nvme0n1 : 2.00 5862.59 732.82 0.00 0.00 2726.61 610.68 6315.29 00:26:43.136 [2024-12-06T10:28:16.074Z] =================================================================================================================== 00:26:43.136 [2024-12-06T10:28:16.074Z] Total : 5862.59 732.82 0.00 0.00 2726.61 610.68 6315.29 00:26:43.136 { 00:26:43.136 "results": [ 00:26:43.136 { 00:26:43.136 "job": "nvme0n1", 00:26:43.136 "core_mask": "0x2", 00:26:43.136 "workload": "randread", 00:26:43.136 "status": "finished", 00:26:43.136 "queue_depth": 16, 00:26:43.136 "io_size": 131072, 00:26:43.136 "runtime": 2.002869, 00:26:43.136 "iops": 5862.590114480778, 00:26:43.136 "mibps": 732.8237643100972, 00:26:43.136 "io_failed": 0, 00:26:43.136 "io_timeout": 0, 00:26:43.136 "avg_latency_us": 2726.6065647791143, 00:26:43.136 "min_latency_us": 610.6763636363636, 00:26:43.136 "max_latency_us": 6315.2872727272725 00:26:43.136 } 00:26:43.136 ], 00:26:43.136 "core_count": 1 00:26:43.136 } 00:26:43.136 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:43.136 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:43.136 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:43.136 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:43.136 | select(.opcode=="crc32c") 00:26:43.136 | "\(.module_name) \(.executed)"' 00:26:43.136 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1873444 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1873444 ']' 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1873444 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873444 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873444' 00:26:43.395 killing process with pid 1873444 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1873444 00:26:43.395 Received shutdown signal, test time was about 2.000000 seconds 
00:26:43.395 00:26:43.395 Latency(us) 00:26:43.395 [2024-12-06T10:28:16.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.395 [2024-12-06T10:28:16.333Z] =================================================================================================================== 00:26:43.395 [2024-12-06T10:28:16.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.395 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1873444 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1873982 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1873982 /var/tmp/bperf.sock 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1873982 ']' 00:26:43.654 11:28:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:43.654 [2024-12-06 11:28:16.441638] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:43.654 [2024-12-06 11:28:16.441687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873982 ] 00:26:43.654 [2024-12-06 11:28:16.515504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.654 [2024-12-06 11:28:16.554479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:43.654 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:43.913 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.913 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.480 nvme0n1 00:26:44.480 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.480 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.480 Running I/O for 2 seconds... 
00:26:46.352 29984.00 IOPS, 117.12 MiB/s [2024-12-06T10:28:19.290Z] 29992.00 IOPS, 117.16 MiB/s 00:26:46.352 Latency(us) 00:26:46.352 [2024-12-06T10:28:19.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.352 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:46.352 nvme0n1 : 2.01 29991.64 117.15 0.00 0.00 4261.15 2129.92 12153.95 00:26:46.352 [2024-12-06T10:28:19.290Z] =================================================================================================================== 00:26:46.352 [2024-12-06T10:28:19.290Z] Total : 29991.64 117.15 0.00 0.00 4261.15 2129.92 12153.95 00:26:46.352 { 00:26:46.352 "results": [ 00:26:46.352 { 00:26:46.352 "job": "nvme0n1", 00:26:46.352 "core_mask": "0x2", 00:26:46.352 "workload": "randwrite", 00:26:46.352 "status": "finished", 00:26:46.352 "queue_depth": 128, 00:26:46.352 "io_size": 4096, 00:26:46.352 "runtime": 2.005359, 00:26:46.352 "iops": 29991.637407566424, 00:26:46.352 "mibps": 117.15483362330635, 00:26:46.352 "io_failed": 0, 00:26:46.352 "io_timeout": 0, 00:26:46.352 "avg_latency_us": 4261.154392125564, 00:26:46.352 "min_latency_us": 2129.92, 00:26:46.352 "max_latency_us": 12153.949090909091 00:26:46.352 } 00:26:46.352 ], 00:26:46.352 "core_count": 1 00:26:46.352 } 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.611 | select(.opcode=="crc32c") 00:26:46.611 | "\(.module_name) \(.executed)"' 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1873982 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1873982 ']' 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1873982 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.611 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873982 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873982' 00:26:46.870 killing process with pid 1873982 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1873982 00:26:46.870 Received shutdown signal, test time was about 2.000000 seconds 
00:26:46.870 00:26:46.870 Latency(us) 00:26:46.870 [2024-12-06T10:28:19.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.870 [2024-12-06T10:28:19.808Z] =================================================================================================================== 00:26:46.870 [2024-12-06T10:28:19.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1873982 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1874523 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1874523 /var/tmp/bperf.sock 00:26:46.870 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:46.871 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1874523 ']' 00:26:46.871 11:28:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.871 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.871 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.871 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.871 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.871 [2024-12-06 11:28:19.753587] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:46.871 [2024-12-06 11:28:19.753630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874523 ] 00:26:46.871 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.871 Zero copy mechanism will not be used. 
00:26:47.130 [2024-12-06 11:28:19.826727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.130 [2024-12-06 11:28:19.861097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.130 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.130 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:47.130 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:47.130 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:47.130 11:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.390 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.390 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.649 nvme0n1 00:26:47.649 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.649 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.649 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.649 Zero copy mechanism will not be used. 00:26:47.649 Running I/O for 2 seconds... 
00:26:49.958 6871.00 IOPS, 858.88 MiB/s [2024-12-06T10:28:22.896Z] 7158.50 IOPS, 894.81 MiB/s 00:26:49.958 Latency(us) 00:26:49.958 [2024-12-06T10:28:22.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:49.958 nvme0n1 : 2.00 7155.04 894.38 0.00 0.00 2232.23 1817.13 10962.39 00:26:49.958 [2024-12-06T10:28:22.896Z] =================================================================================================================== 00:26:49.958 [2024-12-06T10:28:22.896Z] Total : 7155.04 894.38 0.00 0.00 2232.23 1817.13 10962.39 00:26:49.958 { 00:26:49.958 "results": [ 00:26:49.958 { 00:26:49.958 "job": "nvme0n1", 00:26:49.958 "core_mask": "0x2", 00:26:49.958 "workload": "randwrite", 00:26:49.958 "status": "finished", 00:26:49.958 "queue_depth": 16, 00:26:49.958 "io_size": 131072, 00:26:49.958 "runtime": 2.003204, 00:26:49.958 "iops": 7155.037629717193, 00:26:49.958 "mibps": 894.3797037146492, 00:26:49.958 "io_failed": 0, 00:26:49.958 "io_timeout": 0, 00:26:49.958 "avg_latency_us": 2232.227902805351, 00:26:49.958 "min_latency_us": 1817.1345454545456, 00:26:49.958 "max_latency_us": 10962.385454545454 00:26:49.958 } 00:26:49.958 ], 00:26:49.958 "core_count": 1 00:26:49.958 } 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.958 | select(.opcode=="crc32c") 00:26:49.958 | "\(.module_name) \(.executed)"' 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1874523 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1874523 ']' 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1874523 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874523 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874523' 00:26:49.958 killing process with pid 1874523 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1874523 00:26:49.958 Received shutdown signal, test time was about 2.000000 seconds 
00:26:49.958 00:26:49.958 Latency(us) 00:26:49.958 [2024-12-06T10:28:22.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.958 [2024-12-06T10:28:22.896Z] =================================================================================================================== 00:26:49.958 [2024-12-06T10:28:22.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.958 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1874523 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1872670 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1872670 ']' 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1872670 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872670 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872670' 00:26:50.216 killing process with pid 1872670 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1872670 00:26:50.216 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1872670 00:26:50.216 00:26:50.216 
real 0m13.528s 00:26:50.216 user 0m25.575s 00:26:50.216 sys 0m4.641s 00:26:50.216 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.216 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.216 ************************************ 00:26:50.216 END TEST nvmf_digest_clean 00:26:50.216 ************************************ 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.473 ************************************ 00:26:50.473 START TEST nvmf_digest_error 00:26:50.473 ************************************ 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1875107 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1875107 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1875107 ']' 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.473 [2024-12-06 11:28:23.254759] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:50.473 [2024-12-06 11:28:23.254798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.473 [2024-12-06 11:28:23.327244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.473 [2024-12-06 11:28:23.364865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.473 [2024-12-06 11:28:23.364900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:50.473 [2024-12-06 11:28:23.364906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.473 [2024-12-06 11:28:23.364911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.473 [2024-12-06 11:28:23.364916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.473 [2024-12-06 11:28:23.365496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.473 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 [2024-12-06 11:28:23.429922] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.731 11:28:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.731 null0 00:26:50.731 [2024-12-06 11:28:23.525275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.731 [2024-12-06 11:28:23.549476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1875311 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1875311 /var/tmp/bperf.sock 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1875311 ']' 
00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:50.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:50.731 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:50.731 [2024-12-06 11:28:23.599329] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization...
00:26:50.731 [2024-12-06 11:28:23.599368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875311 ]
00:26:50.989 [2024-12-06 11:28:23.671650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:50.989 [2024-12-06 11:28:23.711124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:50.989 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:50.989 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:50.989 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:50.989 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:51.246 11:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:51.503 nvme0n1
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:51.503 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.761 Running I/O for 2 seconds... 00:26:51.761 [2024-12-06 11:28:24.510472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.510503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.510513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.521324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.521347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.521355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.529206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.529226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.529234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.541077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.541098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20911 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.541106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.551438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.551457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.551465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.559316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.559335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.559342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.569923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.569942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.569949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.581248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.581271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.581278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.588689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.588708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.588716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.599607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.599627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.599634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.610281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.610307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.618402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 
00:26:51.761 [2024-12-06 11:28:24.618421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.618428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.629906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.629925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.629932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.640042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.640066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.640075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.650483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.650502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.650510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.658031] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.658051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.658065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.667726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.667746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.761 [2024-12-06 11:28:24.667754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.761 [2024-12-06 11:28:24.679290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.761 [2024-12-06 11:28:24.679310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.762 [2024-12-06 11:28:24.679318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.762 [2024-12-06 11:28:24.690546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:51.762 [2024-12-06 11:28:24.690567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.762 [2024-12-06 11:28:24.690574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.699204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.699225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.699234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.708983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.709003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.709011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.716874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.716894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.716901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.728416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.728437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.728445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.736858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.736878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.736886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.744836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.744856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.744867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.755579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.755600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.755609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.764702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.764720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.019 [2024-12-06 11:28:24.764728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.019 [2024-12-06 11:28:24.772694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.019 [2024-12-06 11:28:24.772714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.772722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.781279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.781299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.781307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.790137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.790157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.790165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.800391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.800410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5182 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.800418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.808917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.808936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.808944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.817692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.817711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.817719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.825886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.825905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.825912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.834138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.834157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:16496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.834165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.842608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.842627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.842634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.851884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.851904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.851911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.860314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.860333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.860341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.868962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 
11:28:24.868981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.868989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.876842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.876861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.876870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.885622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.885641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.885648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.893954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.893974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.893987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.903137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.903156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.903164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.911770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.911789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.911797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.919397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.919417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.919424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.929762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.929782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.929790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.939145] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.939164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.939171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.020 [2024-12-06 11:28:24.948023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.020 [2024-12-06 11:28:24.948041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.020 [2024-12-06 11:28:24.948048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.956849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.956868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.956876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.965087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.965106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.965113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.974241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.974264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.983012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.983031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.983038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.990566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.990586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.990593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:24.999430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:24.999448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:24.999456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.277 [2024-12-06 11:28:25.008089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.277 [2024-12-06 11:28:25.008107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.277 [2024-12-06 11:28:25.008115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.017635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.017654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.017661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.025608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.025627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.025634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.034110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.034129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 
11:28:25.034136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.043244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.043263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.043271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.052242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.052261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.052268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.059777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.059795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.059802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.068651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.068670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23311 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.068677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.077384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.077403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.077411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.085859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.085878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.085886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.094266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.094286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.094293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.102754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.102773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.102780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.111194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.111221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.119777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.119795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.119806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.128788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.128807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.128815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.136384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 
00:26:52.278 [2024-12-06 11:28:25.136404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.136412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.146044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.146069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.146076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.154763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.154780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.154788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.162694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.162713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.162721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.171313] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.171332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.171339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.180447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.180466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.180474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.188307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.188326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.188333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.197484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.197506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.197514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:52.278 [2024-12-06 11:28:25.205618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.278 [2024-12-06 11:28:25.205637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.278 [2024-12-06 11:28:25.205645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.214918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.214938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.214945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.224389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.224407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.224414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.233864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.233882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.233889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.242620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.242639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.242647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.253006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.253025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.253032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.260944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.260972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.272561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.272580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.272588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.283240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.283259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.283266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.290642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.290661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.290668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.301832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.301851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.301858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.312355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.312374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12579 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.312381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.324849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.324868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.324876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.332384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.332402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.332409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.342645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.342663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.342672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.354444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.354463] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.354470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.365916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.365938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.365946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.373472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.373491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.373498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.384020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.384040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.384047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.395605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 
11:28:25.395624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.405793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.405811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.405819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.417489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.417507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.417514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.425243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.536 [2024-12-06 11:28:25.425262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.536 [2024-12-06 11:28:25.425269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.536 [2024-12-06 11:28:25.437051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13049a0) 00:26:52.537 [2024-12-06 11:28:25.437075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.537 [2024-12-06 11:28:25.437083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.537 [2024-12-06 11:28:25.448351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.537 [2024-12-06 11:28:25.448370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.537 [2024-12-06 11:28:25.448377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.537 [2024-12-06 11:28:25.457820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.537 [2024-12-06 11:28:25.457838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.537 [2024-12-06 11:28:25.457845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.537 [2024-12-06 11:28:25.465471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.537 [2024-12-06 11:28:25.465491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.537 [2024-12-06 11:28:25.465498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.475211] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.475231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.475239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.486669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.486689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.486696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 27170.00 IOPS, 106.13 MiB/s [2024-12-06T10:28:25.732Z] [2024-12-06 11:28:25.497034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.497053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.497069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.506652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.506671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.506679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.518004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.518023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.518031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.529488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.529507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.529514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.537685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.794 [2024-12-06 11:28:25.537714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.794 [2024-12-06 11:28:25.548758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.794 [2024-12-06 11:28:25.548777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.548784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.558718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.558736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.558743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.566585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.566604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.566611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.575555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.575574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.575581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.585156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.585176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10543 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.585183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.593329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.593347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.593355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.602486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.602505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.602513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.613381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.613400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.613408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.624813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.624835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.624842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.636407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.636427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.636435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.646363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.646381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.646388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.655212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.655230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.655238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.666051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 
11:28:25.666074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.666082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.676943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.676961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.676969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.684373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.684392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.684400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.694793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.694812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.694819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.705557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.705586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.713332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.713351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.713358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.795 [2024-12-06 11:28:25.724950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:52.795 [2024-12-06 11:28:25.724969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.795 [2024-12-06 11:28:25.724977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.735612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.735631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.735638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.744083] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.744101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.744109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.755042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.755065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.755073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.764874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.764892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.764900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.774866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.774886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.774894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.783047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.783073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.783080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.794310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.794333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.794341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.806274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.806294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.806301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.813896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.813914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.813922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.825142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.825162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.825169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.836685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.836703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.836710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.847613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.847632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.847639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.858133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.858152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 
11:28:25.858160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.866438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.866457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.054 [2024-12-06 11:28:25.866465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.054 [2024-12-06 11:28:25.874977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.054 [2024-12-06 11:28:25.874996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.875003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.886002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.886021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.886028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.897011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.897031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1055 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.897038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.906698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.906717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.906724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.915487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.915506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.915513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.923346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.923365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.923372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.932322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.932341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.932348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.940129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.940149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.940156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.950454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.950474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.950481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.960211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.960230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.960240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.971643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.971663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.971671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.055 [2024-12-06 11:28:25.979283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.055 [2024-12-06 11:28:25.979302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.055 [2024-12-06 11:28:25.979310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.313 [2024-12-06 11:28:25.990539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.313 [2024-12-06 11:28:25.990559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.313 [2024-12-06 11:28:25.990566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.313 [2024-12-06 11:28:26.001486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.313 [2024-12-06 11:28:26.001506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.313 [2024-12-06 11:28:26.001513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.313 [2024-12-06 11:28:26.009361] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.009380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.009388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.021383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.021403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.021410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.029011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.029029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.029037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.039386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.039404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.039412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.050942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.050966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.050973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.061876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.061895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.061903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.070088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.070108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.070116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.080893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.080911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.080919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.092060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.092079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.092087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.099983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.100003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.100011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.111690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.111710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.111717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.123654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.123674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 
11:28:26.123682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.131178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.131197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.131204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.142244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.142264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.142273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.152450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.152470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.152477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.163760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.163779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:164 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.163786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.172139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.172158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.172166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.182931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.182951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.182959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.192740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.201454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.201474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.201482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.210102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.210123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.210130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.219493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.219516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.219523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.228396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.228416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.314 [2024-12-06 11:28:26.228423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.314 [2024-12-06 11:28:26.236289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13049a0) 00:26:53.314 [2024-12-06 11:28:26.236309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.315 [2024-12-06 11:28:26.236317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.315 [2024-12-06 11:28:26.244276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.315 [2024-12-06 11:28:26.244295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.315 [2024-12-06 11:28:26.244303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.255157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.255178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.255185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.262170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.262189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.262196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.272929] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.272949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.272956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.284301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.284320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.284328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.294281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.294301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.294309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.301857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.301877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.301885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.311368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.311388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.311396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.320761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.320780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.320788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.328946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.328966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.328973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.337927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.337947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.337954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.345756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.345776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.345784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.354344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.354363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.354371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.364595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.364615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.364623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.372849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.372869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 
11:28:26.372879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.382854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.382874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.382881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.393088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.393109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.393116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.400643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.573 [2024-12-06 11:28:26.400663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.573 [2024-12-06 11:28:26.400670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.573 [2024-12-06 11:28:26.410787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.410806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2580 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.410813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.421246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.421265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.421273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.429174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.429195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.429202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.438914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.438933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.438940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.448999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.449018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.449025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.460401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.460425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.460433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.469003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.469023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.469032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.477652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.477672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.477679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 [2024-12-06 11:28:26.487112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.487132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.487140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 26775.00 IOPS, 104.59 MiB/s [2024-12-06T10:28:26.512Z] [2024-12-06 11:28:26.497172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13049a0) 00:26:53.574 [2024-12-06 11:28:26.497191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.574 [2024-12-06 11:28:26.497198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.574 00:26:53.574 Latency(us) 00:26:53.574 [2024-12-06T10:28:26.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:53.574 nvme0n1 : 2.00 26788.96 104.64 0.00 0.00 4773.77 2487.39 16443.58 00:26:53.574 [2024-12-06T10:28:26.512Z] =================================================================================================================== 00:26:53.574 [2024-12-06T10:28:26.512Z] Total : 26788.96 104.64 0.00 0.00 4773.77 2487.39 16443.58 00:26:53.574 { 00:26:53.574 "results": [ 00:26:53.574 { 00:26:53.574 "job": "nvme0n1", 00:26:53.574 "core_mask": "0x2", 00:26:53.574 "workload": "randread", 00:26:53.574 "status": "finished", 00:26:53.574 "queue_depth": 128, 00:26:53.574 "io_size": 4096, 00:26:53.574 "runtime": 2.003736, 00:26:53.574 "iops": 26788.95822603377, 00:26:53.574 "mibps": 104.64436807044441, 00:26:53.574 "io_failed": 0, 00:26:53.574 "io_timeout": 0, 00:26:53.574 "avg_latency_us": 
4773.765360990959, 00:26:53.574 "min_latency_us": 2487.389090909091, 00:26:53.574 "max_latency_us": 16443.578181818182 00:26:53.574 } 00:26:53.574 ], 00:26:53.574 "core_count": 1 00:26:53.574 } 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:53.832 | .driver_specific 00:26:53.832 | .nvme_error 00:26:53.832 | .status_code 00:26:53.832 | .command_transient_transport_error' 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1875311 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1875311 ']' 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1875311 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875311 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1875311' 00:26:53.832 killing process with pid 1875311 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1875311 00:26:53.832 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.832 00:26:53.832 Latency(us) 00:26:53.832 [2024-12-06T10:28:26.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.832 [2024-12-06T10:28:26.770Z] =================================================================================================================== 00:26:53.832 [2024-12-06T10:28:26.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.832 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1875311 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1875899 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1875899 /var/tmp/bperf.sock 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 
-t 2 -q 16 -z 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1875899 ']' 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.091 11:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.091 [2024-12-06 11:28:26.941689] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:54.091 [2024-12-06 11:28:26.941737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875899 ] 00:26:54.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:54.091 Zero copy mechanism will not be used. 
00:26:54.091 [2024-12-06 11:28:27.015169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.349 [2024-12-06 11:28:27.054592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.349 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.349 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:54.349 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.349 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.607 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.865 nvme0n1 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:54.865 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:55.124 Zero copy mechanism will not be used. 00:26:55.124 Running I/O for 2 seconds... 00:26:55.124 [2024-12-06 11:28:27.857677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.124 [2024-12-06 11:28:27.857712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.124 [2024-12-06 11:28:27.857722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.124 [2024-12-06 11:28:27.863046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.124 [2024-12-06 11:28:27.863076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.124 [2024-12-06 11:28:27.863085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.124 
[2024-12-06 11:28:27.868187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.868209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.868221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.873299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.873321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.873328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.878463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.878484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.878491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.883526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.883547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.883555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.888790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.888812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.888820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.893919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.893940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.893948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.899083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.899104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.899111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.904192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.904213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.904221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.909311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.909331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.909338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.914341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.914365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.914372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.919301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.919321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.919329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.924223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.924243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.924251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.929416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.929438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.929445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.934441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.934463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.939481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.939503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.125 [2024-12-06 11:28:27.939510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.125 [2024-12-06 11:28:27.944565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.125 [2024-12-06 11:28:27.944586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.944594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.949739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.949761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.949770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.954796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.954817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.954825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.960190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.960211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.960219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.965441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.965463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.970650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.970671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.970679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.975768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.975789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.975796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.980761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.980781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.125 [2024-12-06 11:28:27.980789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.125 [2024-12-06 11:28:27.985807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.125 [2024-12-06 11:28:27.985828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:27.985836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:27.990894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:27.990914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:27.990922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:27.996024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:27.996044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:27.996051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.001041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.001069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.001080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.006203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.006224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.006231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.011725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.011745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.011753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.017079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.017100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.017107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.022224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.022245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.022252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.027632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.027652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.027660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.032875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.032897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.032904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.037954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.037976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.037983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.043074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.043094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.048083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.048109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.048116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.052990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.053011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.053018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.126 [2024-12-06 11:28:28.057908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.126 [2024-12-06 11:28:28.057929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.126 [2024-12-06 11:28:28.057937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.386 [2024-12-06 11:28:28.062924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.386 [2024-12-06 11:28:28.062945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.386 [2024-12-06 11:28:28.062952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.386 [2024-12-06 11:28:28.068053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.386 [2024-12-06 11:28:28.068080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.386 [2024-12-06 11:28:28.068088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.386 [2024-12-06 11:28:28.073198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.386 [2024-12-06 11:28:28.073218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.386 [2024-12-06 11:28:28.073226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.386 [2024-12-06 11:28:28.078306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.386 [2024-12-06 11:28:28.078327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.386 [2024-12-06 11:28:28.078334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.386 [2024-12-06 11:28:28.083193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.386 [2024-12-06 11:28:28.083214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.083222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.088172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.088193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.088201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.093031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.093052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.093065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.098064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.098086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.103075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.103095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.103103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.108045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.108076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.108083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.113083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.113104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.113112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.118134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.118154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.118162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.123166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.123188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.123196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.128260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.128288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.133247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.133267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.133278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.138232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.138253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.138260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.143200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.143221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.143228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.148129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.148150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.148157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.153074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.153095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.158070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.158091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.158098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.163103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.163123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.163130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.168287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.168307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.168314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.173282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.173303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.173310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.178443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.178463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.178471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.183573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.183594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.183601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.188598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.188621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.188628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.193621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.193642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.193649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.198686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.387 [2024-12-06 11:28:28.198707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.387 [2024-12-06 11:28:28.198714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.387 [2024-12-06 11:28:28.203689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.203710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.203717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.208835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.208856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.208864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.213800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.213821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.213828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.218809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.218830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.218841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.223852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.223873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.223880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.229041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.229067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.229074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.234240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.234260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.234268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.239475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.239495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.239503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.244765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.244785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.244793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.249913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.249934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.249942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.254942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.254962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.254970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.260021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.260041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.260048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.265321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.265345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.265352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.270350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.270372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.270379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.275434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.275455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.275462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.280497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.280518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.280526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.285843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.285863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.285871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.290948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.290968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.290976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.296011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.296031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.296038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.301567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.301588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.301595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.306595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.306616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.306624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.311617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.311637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.388 [2024-12-06 11:28:28.316704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.388 [2024-12-06 11:28:28.316724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06 11:28:28.316732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.321845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.321866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.321874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.326765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.326785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.326793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.331891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.331911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.331918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.337157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.337177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.337185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.342524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.342545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.348132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.348152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.348160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.353431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.353451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.649 [2024-12-06 11:28:28.353462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:55.649 [2024-12-06 11:28:28.358535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480)
00:26:55.649 [2024-12-06 11:28:28.358555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.388 [2024-12-06
11:28:28.358562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.649 [2024-12-06 11:28:28.363667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.649 [2024-12-06 11:28:28.363688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.649 [2024-12-06 11:28:28.363696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.649 [2024-12-06 11:28:28.368865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.649 [2024-12-06 11:28:28.368886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.649 [2024-12-06 11:28:28.368893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.649 [2024-12-06 11:28:28.374097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.649 [2024-12-06 11:28:28.374118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.649 [2024-12-06 11:28:28.374125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.649 [2024-12-06 11:28:28.379187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.649 [2024-12-06 11:28:28.379207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.649 [2024-12-06 11:28:28.379215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.649 [2024-12-06 11:28:28.384185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.384206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.384213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.389348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.389369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.389376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.394474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.394495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.394502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.399527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.399552] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.399559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.404671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.404692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.404700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.409583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.409605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.409613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.414452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.414473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.414480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.419324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 
11:28:28.419345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.419352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.424423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.424444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.424451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.429526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.429546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.429554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.434496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.434516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.434523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.439475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.439495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.439506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.444660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.444688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.449894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.449915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.449923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.455370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.455391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.455399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.460751] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.460771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.460778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.466166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.466187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.466194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.471230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.471250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.471258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.476322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.476342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.476349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.481319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.481340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.481347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.486370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.486394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.486402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.491455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.491476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.491484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.496681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.496702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.496709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.650 [2024-12-06 11:28:28.501817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.650 [2024-12-06 11:28:28.501838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.650 [2024-12-06 11:28:28.501845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.506890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.506911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.506918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.512129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.512155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.512162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.517363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.517384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.517391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.522477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.522498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.522505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.527482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.527501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.527509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.532537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.532558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.532566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.537553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.537573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.537581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.542592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.542612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.542619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.547691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.547711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.547718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.552710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.552730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.552737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.557838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.557858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.557866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.562941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.562963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.562971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.568073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.568093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.568101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.573431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.573452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.573462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.651 [2024-12-06 11:28:28.578993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.651 [2024-12-06 11:28:28.579014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.651 [2024-12-06 11:28:28.579021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.584732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.584753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.584761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.590698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.590719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.590727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.596751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.596772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.596779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.603258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 
00:26:55.912 [2024-12-06 11:28:28.603281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.603289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.610603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.610624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.610632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.616874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.616896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.616904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.623811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.623832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.623840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.630577] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.630604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.630611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.636674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.636695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.636703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.641267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.641288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.641296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.647861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.647882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.647890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.654865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.654886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.654894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.660829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.660851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.668223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.668244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.912 [2024-12-06 11:28:28.668252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.912 [2024-12-06 11:28:28.675500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.912 [2024-12-06 11:28:28.675522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.675530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.681573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.681594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.681602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.688537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.688558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.688565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.695373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.695394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.695402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.703189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.703210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.703219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.709653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.709675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.709682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.716323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.716344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.716352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.722312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.722341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.728859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.728880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.913 [2024-12-06 11:28:28.728888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.736664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.736687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.736695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.743091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.743113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.743126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.750168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.750189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.750196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.757700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.757722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.757730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.764908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.764930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.764937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.770486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.770506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.770514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.775577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.775598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.775605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.780293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.780313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.780320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.785303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.785322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.785329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.790358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.790379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.790387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.795418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.795442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.795450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.800541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 
00:26:55.913 [2024-12-06 11:28:28.800562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.800569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.805688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.805707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.805714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.810820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.810841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.810848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.815469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.913 [2024-12-06 11:28:28.815489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.913 [2024-12-06 11:28:28.815497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.913 [2024-12-06 11:28:28.820421] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.820441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.820448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.914 [2024-12-06 11:28:28.825364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.825385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.825393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.914 [2024-12-06 11:28:28.830165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.830185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.914 [2024-12-06 11:28:28.835023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.835043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.835051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:55.914 [2024-12-06 11:28:28.839885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.839906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.839913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.914 [2024-12-06 11:28:28.844852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:55.914 [2024-12-06 11:28:28.844873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.914 [2024-12-06 11:28:28.844880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.849887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.849906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.849913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 5777.00 IOPS, 722.12 MiB/s [2024-12-06T10:28:29.111Z] [2024-12-06 11:28:28.855966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.855988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.855996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.861094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.861115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.866343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.866365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.866372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.871551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.871571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.871579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.876736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.876757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:56.173 [2024-12-06 11:28:28.876764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.881916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.881942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.881949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.887302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.887334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.887341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.892435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.892455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.892463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.897519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.897539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.902692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.902712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.902719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.907760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.907781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.912918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.912938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.912945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.918020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.918041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.918048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.922777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.922798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.922806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.927788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.927808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.927816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.932599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.932619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.932626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.937394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 
00:26:56.173 [2024-12-06 11:28:28.937415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.937422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.942423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.942444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.942451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.945753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.945772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.945779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.949764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.949786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.949793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.954910] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.954930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.954938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.960292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.960313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.960321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.965755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.965776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.965787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.173 [2024-12-06 11:28:28.971291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.173 [2024-12-06 11:28:28.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.173 [2024-12-06 11:28:28.971319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:28.977709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:28.977730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:28.977737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:28.984952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:28.984974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:28.984982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:28.991345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:28.991366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:28.991374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:28.997604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:28.997625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:28.997632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.003226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.003246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.003253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.009283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.009304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.009312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.016733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.016755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.016763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.023605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.023630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.023638] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.029784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.029804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.029812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.035769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.035790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.041259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.041280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.041287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.046879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.046900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.046908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.052769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.052789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.052798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.060022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.060042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.067004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.067026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.067033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.074391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.074411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.074419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.081639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.081661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.081669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.089352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.089374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.089382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.097552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.097575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.097582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.174 [2024-12-06 11:28:29.103757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.174 [2024-12-06 11:28:29.103778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.174 [2024-12-06 11:28:29.103785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.110046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.110074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.110081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.117051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.117077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.117085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.122639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.122659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.122667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.127787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.127807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.127814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.132883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.132904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.132914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.137982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.138002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.138010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.142599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.142619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.142627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.147540] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.147560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.147568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.152548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.152569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.152576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.157434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.157454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.157461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.162288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.162308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.162316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.167469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.167490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.167497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.172402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.172421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.172428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.177365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.177385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.177392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.181814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.181835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.181842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.187371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.187394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.187401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.434 [2024-12-06 11:28:29.193103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.434 [2024-12-06 11:28:29.193125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.434 [2024-12-06 11:28:29.193133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.199661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.199683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.199691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.206695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.206716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.206724] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.213257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.213287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.220226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.220248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.220256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.226624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.226647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.226658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.233703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.233725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.233732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.240517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.240540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.240547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.247363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.247385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.247392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.254011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.254032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.254041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.260967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.260988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.260995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.266415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.266436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.266443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.271512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.271533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.271540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.276589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.276610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.276618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.282220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.282245] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.282253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.288783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.288805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.288813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.295644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.295666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.295674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.302294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.302316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.302323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.308355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.308376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.308384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.315019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.315040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.315047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.321802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.321823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.321830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.328658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.328680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.328688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.335551] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.335572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.335580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.341778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.341798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.435 [2024-12-06 11:28:29.341806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.435 [2024-12-06 11:28:29.347874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.435 [2024-12-06 11:28:29.347895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.436 [2024-12-06 11:28:29.347902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.436 [2024-12-06 11:28:29.353832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.436 [2024-12-06 11:28:29.353852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.436 [2024-12-06 11:28:29.353860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:56.436 [2024-12-06 11:28:29.360296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.436 [2024-12-06 11:28:29.360317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.436 [2024-12-06 11:28:29.360324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.436 [2024-12-06 11:28:29.367637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.436 [2024-12-06 11:28:29.367658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.436 [2024-12-06 11:28:29.367665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.373927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.373949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.373956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.381257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.381279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.381287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.388687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.388708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.388716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.396169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.396191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.396202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.403428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.403449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.403457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.410687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.410708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.410715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.418096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.418118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.418125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.425578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.425600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.425607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.433017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.433038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.433046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.440582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.440603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.440611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.447943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.447965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.447972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.456572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.456602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.464573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.464599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.464607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.472522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.472543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.472552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.480501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.480522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.480530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.488259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.488281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.488288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.496354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.496376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.496383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.504828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.504850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.504857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.512436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.512458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.512465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.520537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.520558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.696 [2024-12-06 11:28:29.520565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.696 [2024-12-06 11:28:29.528262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.696 [2024-12-06 11:28:29.528283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.528297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.536068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.536089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.536097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.543788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.543809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.543817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.551202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.551223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.551231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.559243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.559264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.559271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.566671] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.566692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.566699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.572139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.572160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.572168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.575248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.575269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.575277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.580280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.580300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.580307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.585193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.585217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.585224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.590136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.590156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.590164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.596205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.596226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.596234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.601383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.601403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.601411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.606424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.606445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.606452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.611544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.611564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.611572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.616646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.616666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.616673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.621720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.621740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.621747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.697 [2024-12-06 11:28:29.626839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.697 [2024-12-06 11:28:29.626859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.697 [2024-12-06 11:28:29.626867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.957 [2024-12-06 11:28:29.631918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.957 [2024-12-06 11:28:29.631937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.957 [2024-12-06 11:28:29.631945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.957 [2024-12-06 11:28:29.636999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.957 [2024-12-06 11:28:29.637019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.957 [2024-12-06 11:28:29.637026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.957 [2024-12-06 11:28:29.642084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.642104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.642111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.647121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.647141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.647148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.652160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.652180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.652188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.657208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.657229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.657236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.662212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.662232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.662239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.667223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.667243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.667251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.672257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.672278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.672288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.677297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.677317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.677324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.682328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.682348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.682355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.687306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.687327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.687334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.692300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.692320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.692327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.697274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.697294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.697301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.702269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 
00:26:56.958 [2024-12-06 11:28:29.702289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.702296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.707289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.707309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.707317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.712342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.712362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.712369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.717336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.717360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.717367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.722378] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.722398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.722405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.727409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.727429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.727437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.958 [2024-12-06 11:28:29.732441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.958 [2024-12-06 11:28:29.732460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.958 [2024-12-06 11:28:29.732468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.737442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.737461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.737468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.742451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.742471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.742478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.747445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.747465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.747472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.752530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.752550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.752557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.757883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.757903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.757910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.763751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.763771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.763778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.768793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.768812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.768820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.773767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.773787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.773794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.778857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.778885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.783869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.783889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.783896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.788894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.788914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.788921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.793871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.793890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.793897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.798884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.798904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:56.959 [2024-12-06 11:28:29.798911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.803899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.803919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.803930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.808881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.808901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.808908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.813939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.959 [2024-12-06 11:28:29.813959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.959 [2024-12-06 11:28:29.813966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.959 [2024-12-06 11:28:29.818958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.818978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.818985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.824018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.824037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.824044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.829073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.829093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.829100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.834099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.834118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.834125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.839074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.839095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.839102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.844232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.844250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.844258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.849381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.849411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:56.960 [2024-12-06 11:28:29.854549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x139a480) 00:26:56.960 [2024-12-06 11:28:29.854569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.960 [2024-12-06 11:28:29.854576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.960 5533.50 IOPS, 691.69 MiB/s 00:26:56.960 Latency(us) 00:26:56.960 [2024-12-06T10:28:29.898Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:26:56.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:56.960 nvme0n1 : 2.00 5533.40 691.68 0.00 0.00 2889.19 592.06 11021.96 00:26:56.960 [2024-12-06T10:28:29.898Z] =================================================================================================================== 00:26:56.960 [2024-12-06T10:28:29.898Z] Total : 5533.40 691.68 0.00 0.00 2889.19 592.06 11021.96 00:26:56.960 { 00:26:56.960 "results": [ 00:26:56.960 { 00:26:56.960 "job": "nvme0n1", 00:26:56.960 "core_mask": "0x2", 00:26:56.960 "workload": "randread", 00:26:56.960 "status": "finished", 00:26:56.960 "queue_depth": 16, 00:26:56.960 "io_size": 131072, 00:26:56.960 "runtime": 2.003107, 00:26:56.960 "iops": 5533.403857107983, 00:26:56.960 "mibps": 691.6754821384978, 00:26:56.960 "io_failed": 0, 00:26:56.960 "io_timeout": 0, 00:26:56.960 "avg_latency_us": 2889.187562087858, 00:26:56.960 "min_latency_us": 592.0581818181818, 00:26:56.960 "max_latency_us": 11021.963636363636 00:26:56.960 } 00:26:56.960 ], 00:26:56.960 "core_count": 1 00:26:56.960 } 00:26:56.960 11:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:56.960 11:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:56.960 11:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:56.960 | .driver_specific 00:26:56.960 | .nvme_error 00:26:56.960 | .status_code 00:26:56.960 | .command_transient_transport_error' 00:26:56.960 11:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:26:57.218 11:28:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1875899 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1875899 ']' 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1875899 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875899 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1875899' 00:26:57.218 killing process with pid 1875899 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1875899 00:26:57.218 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.218 00:26:57.218 Latency(us) 00:26:57.218 [2024-12-06T10:28:30.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.218 [2024-12-06T10:28:30.156Z] =================================================================================================================== 00:26:57.218 [2024-12-06T10:28:30.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.218 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1875899 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 
-- # run_bperf_err randwrite 4096 128 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1876450 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1876450 /var/tmp/bperf.sock 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1876450 ']' 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.477 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.477 [2024-12-06 11:28:30.330299] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:26:57.477 [2024-12-06 11:28:30.330340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876450 ] 00:26:57.477 [2024-12-06 11:28:30.400583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.736 [2024-12-06 11:28:30.435398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.736 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.736 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:57.736 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.736 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.995 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.254 nvme0n1 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:58.254 11:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.254 Running I/O for 2 seconds... 
00:26:58.254 [2024-12-06 11:28:31.068284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1430 00:26:58.254 [2024-12-06 11:28:31.068993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.254 [2024-12-06 11:28:31.069023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:58.254 [2024-12-06 11:28:31.076240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efa3a0 00:26:58.254 [2024-12-06 11:28:31.076843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.076865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.084629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eff3c8 00:26:58.255 [2024-12-06 11:28:31.085233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.085253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.093568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eefae0 00:26:58.255 [2024-12-06 11:28:31.094290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.094310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.102456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efdeb0 00:26:58.255 [2024-12-06 11:28:31.103285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.103305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.111384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee3060 00:26:58.255 [2024-12-06 11:28:31.112337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.112357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.120242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee0a68 00:26:58.255 [2024-12-06 11:28:31.121280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.129081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eefae0 00:26:58.255 [2024-12-06 11:28:31.130240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.130258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.136579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee6738 00:26:58.255 [2024-12-06 11:28:31.137093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.145342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7100 00:26:58.255 [2024-12-06 11:28:31.145912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.145931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.154194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5a90 00:26:58.255 [2024-12-06 11:28:31.154878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.154896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.162457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efd208 00:26:58.255 [2024-12-06 11:28:31.163464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.163482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.170844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7970 00:26:58.255 [2024-12-06 11:28:31.171664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.171682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.179725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc998 00:26:58.255 [2024-12-06 11:28:31.180655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.180673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:58.255 [2024-12-06 11:28:31.188567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7100 00:26:58.255 [2024-12-06 11:28:31.189618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.255 [2024-12-06 11:28:31.189638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.195317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1868 00:26:58.515 [2024-12-06 11:28:31.195910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 
[2024-12-06 11:28:31.195928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.204666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eea680 00:26:58.515 [2024-12-06 11:28:31.205562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.205580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.214003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7da8 00:26:58.515 [2024-12-06 11:28:31.215162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.215180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.222039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efb480 00:26:58.515 [2024-12-06 11:28:31.223182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.223200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.230456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef9b30 00:26:58.515 [2024-12-06 11:28:31.231555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9837 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.231573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.237656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efd640 00:26:58.515 [2024-12-06 11:28:31.238853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.238871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.244698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efbcf0 00:26:58.515 [2024-12-06 11:28:31.245260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.245277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.253368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eed4e8 00:26:58.515 [2024-12-06 11:28:31.254036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.254054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.262121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efb048 00:26:58.515 [2024-12-06 11:28:31.262909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.262929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.270791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef81e0 00:26:58.515 [2024-12-06 11:28:31.271690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.271708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.278558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ede470 00:26:58.515 [2024-12-06 11:28:31.279120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.279138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.287728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc998 00:26:58.515 [2024-12-06 11:28:31.288729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.288747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.296179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eed4e8 00:26:58.515 [2024-12-06 11:28:31.297167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.297186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.304056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efcdd0 00:26:58.515 [2024-12-06 11:28:31.304835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.304853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.313270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efcdd0 00:26:58.515 [2024-12-06 11:28:31.314502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.314520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.321904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eed0b0 00:26:58.515 [2024-12-06 11:28:31.323394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.323411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.327896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef81e0 00:26:58.515 
[2024-12-06 11:28:31.328469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.328487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.337211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc560 00:26:58.515 [2024-12-06 11:28:31.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.338138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.345145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efd208 00:26:58.515 [2024-12-06 11:28:31.345932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.515 [2024-12-06 11:28:31.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:58.515 [2024-12-06 11:28:31.353212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:58.515 [2024-12-06 11:28:31.354000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.354018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.361612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9918f0) with pdu=0x200016efc560 00:26:58.516 [2024-12-06 11:28:31.362374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.362392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.369631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016edece0 00:26:58.516 [2024-12-06 11:28:31.370063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.370082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.380016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef57b0 00:26:58.516 [2024-12-06 11:28:31.381349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.381365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.386008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee3d08 00:26:58.516 [2024-12-06 11:28:31.386701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.386718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.394330] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4de8 00:26:58.516 [2024-12-06 11:28:31.395000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.395018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.402175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef8a50 00:26:58.516 [2024-12-06 11:28:31.402769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.402786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.411825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeaab8 00:26:58.516 [2024-12-06 11:28:31.412719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.412738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.419655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6458 00:26:58.516 [2024-12-06 11:28:31.420430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.420448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:26:58.516 [2024-12-06 11:28:31.428293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7970 00:26:58.516 [2024-12-06 11:28:31.429306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.429323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.435902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef31b8 00:26:58.516 [2024-12-06 11:28:31.436584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.436602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:58.516 [2024-12-06 11:28:31.444242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eec408 00:26:58.516 [2024-12-06 11:28:31.444784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.516 [2024-12-06 11:28:31.444803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.452872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee99d8 00:26:58.776 [2024-12-06 11:28:31.453683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.453700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.460660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efe720 00:26:58.776 [2024-12-06 11:28:31.461437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.461455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.469829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efe2e8 00:26:58.776 [2024-12-06 11:28:31.470718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.470735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.478674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1868 00:26:58.776 [2024-12-06 11:28:31.479799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.479822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.486767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ede470 00:26:58.776 [2024-12-06 11:28:31.487879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.487897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.494387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef3a28 00:26:58.776 [2024-12-06 11:28:31.495465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.495483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.503181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef3a28 00:26:58.776 [2024-12-06 11:28:31.504022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.504040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.510704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee12d8 00:26:58.776 [2024-12-06 11:28:31.511820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.511837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.518429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4298 00:26:58.776 [2024-12-06 11:28:31.519009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.519026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.526897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4578 00:26:58.776 [2024-12-06 11:28:31.527568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.527585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.535784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6cc8 00:26:58.776 [2024-12-06 11:28:31.536689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.536707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.543597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1868 00:26:58.776 [2024-12-06 11:28:31.544364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.544382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.552547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5a90 00:26:58.776 [2024-12-06 11:28:31.553454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 
[2024-12-06 11:28:31.553472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.560227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee9e10 00:26:58.776 [2024-12-06 11:28:31.561385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.561403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.567319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eecc78 00:26:58.776 [2024-12-06 11:28:31.567885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.567902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:58.776 [2024-12-06 11:28:31.575898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4b08 00:26:58.776 [2024-12-06 11:28:31.576620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.776 [2024-12-06 11:28:31.576638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.586130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5658 00:26:58.777 [2024-12-06 11:28:31.587149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21272 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.587168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.595132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee38d0 00:26:58.777 [2024-12-06 11:28:31.596364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.596382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.600733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef0bc0 00:26:58.777 [2024-12-06 11:28:31.601315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.601333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.609474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6020 00:26:58.777 [2024-12-06 11:28:31.610160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.610178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.619062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eea248 00:26:58.777 [2024-12-06 11:28:31.619954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.626695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1ca0 00:26:58.777 [2024-12-06 11:28:31.627443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.627460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.634593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef7970 00:26:58.777 [2024-12-06 11:28:31.635367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.635384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.644637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb760 00:26:58.777 [2024-12-06 11:28:31.645832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.645849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.653322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5a90 00:26:58.777 [2024-12-06 11:28:31.654754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.654772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.659459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efda78 00:26:58.777 [2024-12-06 11:28:31.660143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.660161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.669184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4b08 00:26:58.777 [2024-12-06 11:28:31.670267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.670284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.676498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee01f8 00:26:58.777 [2024-12-06 11:28:31.677260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.677277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.684238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef20d8 00:26:58.777 
[2024-12-06 11:28:31.684789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.684806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.692704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efe720 00:26:58.777 [2024-12-06 11:28:31.693371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.693403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.701535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee3498 00:26:58.777 [2024-12-06 11:28:31.702420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.702438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:58.777 [2024-12-06 11:28:31.709996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef2d80 00:26:58.777 [2024-12-06 11:28:31.710894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.777 [2024-12-06 11:28:31.710912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:59.036 [2024-12-06 11:28:31.718086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9918f0) with pdu=0x200016ef4b08 00:26:59.036 [2024-12-06 11:28:31.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.718868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.726087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef0788 00:26:59.037 [2024-12-06 11:28:31.726502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.726519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.735916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4578 00:26:59.037 [2024-12-06 11:28:31.737107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.737125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.744310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb760 00:26:59.037 [2024-12-06 11:28:31.745498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.745516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.750011] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee23b8 00:26:59.037 [2024-12-06 11:28:31.750558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.750576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.759893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeea00 00:26:59.037 [2024-12-06 11:28:31.760867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.760885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.768263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016edf118 00:26:59.037 [2024-12-06 11:28:31.768905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.768922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.775765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee12d8 00:26:59.037 [2024-12-06 11:28:31.776436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.776454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:26:59.037 [2024-12-06 11:28:31.784034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef8618 00:26:59.037 [2024-12-06 11:28:31.784802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.784820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.792785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef8a50 00:26:59.037 [2024-12-06 11:28:31.793696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.793714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.801416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4578 00:26:59.037 [2024-12-06 11:28:31.802405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.802423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.809793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eebfd0 00:26:59.037 [2024-12-06 11:28:31.810442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.810460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.817833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efb048 00:26:59.037 [2024-12-06 11:28:31.818804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.818822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.826010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb760 00:26:59.037 [2024-12-06 11:28:31.826818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.826836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.834888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef2510 00:26:59.037 [2024-12-06 11:28:31.835795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.835813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.843725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef81e0 00:26:59.037 [2024-12-06 11:28:31.844836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.844854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.851330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef5378 00:26:59.037 [2024-12-06 11:28:31.852016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.852034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.858970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1868 00:26:59.037 [2024-12-06 11:28:31.859676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.859693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.868978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efdeb0 00:26:59.037 [2024-12-06 11:28:31.870008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.870025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.877205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efda78 00:26:59.037 [2024-12-06 11:28:31.878339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.878356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.884747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eec408 00:26:59.037 [2024-12-06 11:28:31.885985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.886003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.891767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee9168 00:26:59.037 [2024-12-06 11:28:31.892374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.892391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.900463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee73e0 00:26:59.037 [2024-12-06 11:28:31.901179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.037 [2024-12-06 11:28:31.901197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:59.037 [2024-12-06 11:28:31.909039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4f40 00:26:59.037 [2024-12-06 11:28:31.909865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.038 
[2024-12-06 11:28:31.909882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.917666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016edf988
00:26:59.038 [2024-12-06 11:28:31.918591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.918609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.926051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb760
00:26:59.038 [2024-12-06 11:28:31.926973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.926990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.934070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef3a28
00:26:59.038 [2024-12-06 11:28:31.934984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.935001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.942670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4b08
00:26:59.038 [2024-12-06 11:28:31.943722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.943740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.950957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eff3c8
00:26:59.038 [2024-12-06 11:28:31.951648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.951667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.958950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee9e10
00:26:59.038 [2024-12-06 11:28:31.959949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.959967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:59.038 [2024-12-06 11:28:31.967136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee1b48
00:26:59.038 [2024-12-06 11:28:31.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.038 [2024-12-06 11:28:31.967976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:31.975946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee73e0
00:26:59.297 [2024-12-06 11:28:31.976892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:31.976910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:31.984752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc998
00:26:59.297 [2024-12-06 11:28:31.985913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:31.985933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:31.991350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee01f8
00:26:59.297 [2024-12-06 11:28:31.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:31.992087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.000280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128
00:26:59.297 [2024-12-06 11:28:32.000872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.000890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.009765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee9e10
00:26:59.297 [2024-12-06 11:28:32.011016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.011033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.015805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efb480
00:26:59.297 [2024-12-06 11:28:32.016436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.016454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.025883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eec408
00:26:59.297 [2024-12-06 11:28:32.026968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.026985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.033607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc998
00:26:59.297 [2024-12-06 11:28:32.034416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.034434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.042022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee99d8
00:26:59.297 [2024-12-06 11:28:32.042898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.042915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.052184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efe720
00:26:59.297 [2024-12-06 11:28:32.053610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.053627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.058081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1868
00:26:59.297 [2024-12-06 11:28:32.058630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.058648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:59.297 30588.00 IOPS, 119.48 MiB/s [2024-12-06T10:28:32.235Z] [2024-12-06 11:28:32.066703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5220
00:26:59.297 [2024-12-06 11:28:32.067253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.067272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.075193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef4298
00:26:59.297 [2024-12-06 11:28:32.075855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.075874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.084620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef92c0
00:26:59.297 [2024-12-06 11:28:32.085813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.297 [2024-12-06 11:28:32.085831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:59.297 [2024-12-06 11:28:32.092498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb760
00:26:59.297 [2024-12-06 11:28:32.093412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.093430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.100987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ede038
00:26:59.298 [2024-12-06 11:28:32.101634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.101652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.109612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efd640
00:26:59.298 [2024-12-06 11:28:32.110537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.110554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.117784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eedd58
00:26:59.298 [2024-12-06 11:28:32.118746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.118763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.126008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5220
00:26:59.298 [2024-12-06 11:28:32.126978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.126995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.134233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef8e88
00:26:59.298 [2024-12-06 11:28:32.135177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.141848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee5658
00:26:59.298 [2024-12-06 11:28:32.142967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.142984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.149649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef5378
00:26:59.298 [2024-12-06 11:28:32.150188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.150206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.157940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeea00
00:26:59.298 [2024-12-06 11:28:32.158492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.158509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.166412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeee38
00:26:59.298 [2024-12-06 11:28:32.167061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.167078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.174285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee6fa8
00:26:59.298 [2024-12-06 11:28:32.174921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.174939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.182947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eefae0
00:26:59.298 [2024-12-06 11:28:32.183722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.183739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.192176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eee190
00:26:59.298 [2024-12-06 11:28:32.193140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.193157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.200408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef31b8
00:26:59.298 [2024-12-06 11:28:32.201375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.201396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.208644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee95a0
00:26:59.298 [2024-12-06 11:28:32.209615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.209633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.216943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef1430
00:26:59.298 [2024-12-06 11:28:32.217909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.217927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.225174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016edf550
00:26:59.298 [2024-12-06 11:28:32.226117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.298 [2024-12-06 11:28:32.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.298 [2024-12-06 11:28:32.232868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeaab8
00:26:59.557 [2024-12-06 11:28:32.234031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.557 [2024-12-06 11:28:32.234051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.557 [2024-12-06 11:28:32.241478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6890
00:26:59.557 [2024-12-06 11:28:32.242269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.557 [2024-12-06 11:28:32.242288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.557 [2024-12-06 11:28:32.249699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee7c50
00:26:59.557 [2024-12-06 11:28:32.250596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.557 [2024-12-06 11:28:32.250613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.557 [2024-12-06 11:28:32.259732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef3a28
00:26:59.557 [2024-12-06 11:28:32.261097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.557 [2024-12-06 11:28:32.261114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.557 [2024-12-06 11:28:32.265721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efac10
00:26:59.557 [2024-12-06 11:28:32.266263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.557 [2024-12-06 11:28:32.266281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.557 [2024-12-06 11:28:32.274621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6458
00:26:59.558 [2024-12-06 11:28:32.275163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.275182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.283531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eee5c8
00:26:59.558 [2024-12-06 11:28:32.284201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.284220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.293703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeaef0
00:26:59.558 [2024-12-06 11:28:32.294778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.294796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.300319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efa3a0
00:26:59.558 [2024-12-06 11:28:32.300865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.300884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.310126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc560
00:26:59.558 [2024-12-06 11:28:32.311088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.311106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.317921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016edf118
00:26:59.558 [2024-12-06 11:28:32.318725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.318744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.326097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc560
00:26:59.558 [2024-12-06 11:28:32.326721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.326739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.335263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef20d8
00:26:59.558 [2024-12-06 11:28:32.336253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.336272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.343272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeaef0
00:26:59.558 [2024-12-06 11:28:32.344230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.344248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.352119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee7c50
00:26:59.558 [2024-12-06 11:28:32.352972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.352990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.360534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee7c50
00:26:59.558 [2024-12-06 11:28:32.361555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.361573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.368906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee7c50
00:26:59.558 [2024-12-06 11:28:32.369850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.369868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.377149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee7c50
00:26:59.558 [2024-12-06 11:28:32.378087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.378105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.384992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efe720
00:26:59.558 [2024-12-06 11:28:32.385874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.385892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.393176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efb480
00:26:59.558 [2024-12-06 11:28:32.394040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.394062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.403005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eeb328
00:26:59.558 [2024-12-06 11:28:32.404403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.404421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.409022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee27f0
00:26:59.558 [2024-12-06 11:28:32.409664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.409682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.418249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4de8
00:26:59.558 [2024-12-06 11:28:32.419022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.419042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.426842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef46d0
00:26:59.558 [2024-12-06 11:28:32.427808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.427826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.434636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee6b70
00:26:59.558 [2024-12-06 11:28:32.435576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.435594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.443031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6458
00:26:59.558 [2024-12-06 11:28:32.443955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.443974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.451233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef31b8
00:26:59.558 [2024-12-06 11:28:32.452006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.452024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.458893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ede8a8
00:26:59.558 [2024-12-06 11:28:32.459543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.459562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:59.558 [2024-12-06 11:28:32.466729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4140
00:26:59.558 [2024-12-06 11:28:32.467263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.558 [2024-12-06 11:28:32.467281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:59.559 [2024-12-06 11:28:32.476613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef6890
00:26:59.559 [2024-12-06 11:28:32.477631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.559 [2024-12-06 11:28:32.477650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:59.559 [2024-12-06 11:28:32.483196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee38d0
00:26:59.559 [2024-12-06 11:28:32.483717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.559 [2024-12-06 11:28:32.483734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:59.559 [2024-12-06 11:28:32.492300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef8a50
00:26:59.559 [2024-12-06 11:28:32.492714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.559 [2024-12-06 11:28:32.492733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.817 [2024-12-06 11:28:32.500687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef31b8
00:26:59.817 [2024-12-06 11:28:32.501353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.817 [2024-12-06 11:28:32.501371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:59.817 [2024-12-06 11:28:32.510141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee6300
00:26:59.817 [2024-12-06 11:28:32.511284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.511302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.517689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eee190
00:26:59.818 [2024-12-06 11:28:32.518853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.518871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.526055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee6300
00:26:59.818 [2024-12-06 11:28:32.526946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.526963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.535476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eddc00
00:26:59.818 [2024-12-06 11:28:32.536592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.536610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.542128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eea680
00:26:59.818 [2024-12-06 11:28:32.542548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.542566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.551702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eea248
00:26:59.818 [2024-12-06 11:28:32.552744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.552762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.558468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee4578
00:26:59.818 [2024-12-06 11:28:32.558986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.559004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.566914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef2d80
00:26:59.818 [2024-12-06 11:28:32.567549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.567567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.575514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef57b0
00:26:59.818 [2024-12-06 11:28:32.576367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.576385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.583911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eecc78
00:26:59.818 [2024-12-06 11:28:32.584641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.584659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:59.818 [2024-12-06 11:28:32.593113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efd640
00:26:59.818 [2024-12-06 11:28:32.594398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:59.818 [2024-12-06 11:28:32.594416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.600617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eef270 00:26:59.818 [2024-12-06 11:28:32.601247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.601265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.609106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef2510 00:26:59.818 [2024-12-06 11:28:32.609989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.610007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.617017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ee1b48 00:26:59.818 [2024-12-06 11:28:32.617945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.625552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.626045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1894 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.626068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.633539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016eec840 00:26:59.818 [2024-12-06 11:28:32.634260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.634280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.641876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ef46d0 00:26:59.818 [2024-12-06 11:28:32.642594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.642612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.650260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016ede038 00:26:59.818 [2024-12-06 11:28:32.650980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.650998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.658749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.659764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.659783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.667395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.667523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.667542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.676011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.676149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.676166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.684613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.684743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.684762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.693260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.693390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.693409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.818 [2024-12-06 11:28:32.701854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.818 [2024-12-06 11:28:32.701986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.818 [2024-12-06 11:28:32.702004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.710487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.819 [2024-12-06 11:28:32.710618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.819 [2024-12-06 11:28:32.710637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.719175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.819 [2024-12-06 11:28:32.719307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.819 [2024-12-06 11:28:32.719326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.727754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 
00:26:59.819 [2024-12-06 11:28:32.727884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.819 [2024-12-06 11:28:32.727901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.736358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.819 [2024-12-06 11:28:32.736487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.819 [2024-12-06 11:28:32.736506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.744964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:26:59.819 [2024-12-06 11:28:32.745101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.819 [2024-12-06 11:28:32.745119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.819 [2024-12-06 11:28:32.753679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.753810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.753829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.762472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.762599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.762616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.771074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.771205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.771222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.779778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.779909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.779928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.788482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.788610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.788628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.797075] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.797206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.797225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.805647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.805775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.805793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.814427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.814558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.823052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.823188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.823206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:27:00.078 [2024-12-06 11:28:32.831627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.831757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.840267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.840395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.840412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.848825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.848958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.848975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.857717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.857851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.857869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.866422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.866553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.866571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.875160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.875293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.875310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.883852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.078 [2024-12-06 11:28:32.883982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.078 [2024-12-06 11:28:32.883999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.078 [2024-12-06 11:28:32.892456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.892585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.892603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.901034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.901173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.901191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.909642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.909771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.909790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.918260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.918390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.918408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.926903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.927031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.927048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.935482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.935611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.935632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.944098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.944229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.944247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.952715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.952845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.952862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.961352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.961482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 
[2024-12-06 11:28:32.961499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.969947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.970093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.978576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.978704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.978722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.987161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.987292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.987310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:32.995788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:32.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10478 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:32.995935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:33.004380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:33.004510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:33.004528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.079 [2024-12-06 11:28:33.013074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.079 [2024-12-06 11:28:33.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.079 [2024-12-06 11:28:33.013227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.338 [2024-12-06 11:28:33.021900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.338 [2024-12-06 11:28:33.022031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.338 [2024-12-06 11:28:33.022048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.338 [2024-12-06 11:28:33.030490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.339 [2024-12-06 11:28:33.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.339 [2024-12-06 11:28:33.030637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.339 [2024-12-06 11:28:33.039129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.339 [2024-12-06 11:28:33.039259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.339 [2024-12-06 11:28:33.039277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.339 [2024-12-06 11:28:33.047742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.339 [2024-12-06 11:28:33.047873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.339 [2024-12-06 11:28:33.047890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.339 [2024-12-06 11:28:33.056322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.339 [2024-12-06 11:28:33.056451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.339 [2024-12-06 11:28:33.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.339 [2024-12-06 11:28:33.064953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9918f0) with pdu=0x200016efc128 00:27:00.339 30438.00 IOPS, 118.90 MiB/s [2024-12-06T10:28:33.277Z] 
[2024-12-06 11:28:33.065298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.339 [2024-12-06 11:28:33.065315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:00.339 00:27:00.339 Latency(us) 00:27:00.339 [2024-12-06T10:28:33.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.339 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:00.339 nvme0n1 : 2.01 30435.40 118.89 0.00 0.00 4198.70 1645.85 11439.01 00:27:00.339 [2024-12-06T10:28:33.277Z] =================================================================================================================== 00:27:00.339 [2024-12-06T10:28:33.277Z] Total : 30435.40 118.89 0.00 0.00 4198.70 1645.85 11439.01 00:27:00.339 { 00:27:00.339 "results": [ 00:27:00.339 { 00:27:00.339 "job": "nvme0n1", 00:27:00.339 "core_mask": "0x2", 00:27:00.339 "workload": "randwrite", 00:27:00.339 "status": "finished", 00:27:00.339 "queue_depth": 128, 00:27:00.339 "io_size": 4096, 00:27:00.339 "runtime": 2.005691, 00:27:00.339 "iops": 30435.396080453072, 00:27:00.339 "mibps": 118.88826593926981, 00:27:00.339 "io_failed": 0, 00:27:00.339 "io_timeout": 0, 00:27:00.339 "avg_latency_us": 4198.703593116143, 00:27:00.339 "min_latency_us": 1645.8472727272726, 00:27:00.339 "max_latency_us": 11439.01090909091 00:27:00.339 } 00:27:00.339 ], 00:27:00.339 "core_count": 1 00:27:00.339 } 00:27:00.339 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:00.339 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:00.339 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:00.339 | .driver_specific 00:27:00.339 
| .nvme_error 00:27:00.339 | .status_code 00:27:00.339 | .command_transient_transport_error' 00:27:00.339 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 239 > 0 )) 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1876450 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1876450 ']' 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1876450 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1876450 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876450' 00:27:00.598 killing process with pid 1876450 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1876450 00:27:00.598 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.598 00:27:00.598 Latency(us) 00:27:00.598 [2024-12-06T10:28:33.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.598 
[2024-12-06T10:28:33.536Z] =================================================================================================================== 00:27:00.598 [2024-12-06T10:28:33.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1876450 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1876988 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1876988 /var/tmp/bperf.sock 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1876988 ']' 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:27:00.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.598 11:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.857 [2024-12-06 11:28:33.534638] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:27:00.857 [2024-12-06 11:28:33.534685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876988 ] 00:27:00.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:00.857 Zero copy mechanism will not be used. 00:27:00.857 [2024-12-06 11:28:33.608087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.857 [2024-12-06 11:28:33.644749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.424 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.424 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:01.424 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:01.424 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.683 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.252 nvme0n1 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:02.252 11:28:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:02.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:02.252 Zero copy mechanism will not be used. 00:27:02.252 Running I/O for 2 seconds... 
00:27:02.252 [2024-12-06 11:28:35.054239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.054310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.054336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.059947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.060008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.060030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.064331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.064447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.064465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.069485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.069540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.069559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.073933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.073989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.074008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.078156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.078231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.078249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.082300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.082355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.082372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.086489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.086544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.086561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.090707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.090774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.090792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.094941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.095000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.095020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.099108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.099168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.099185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.103203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.103262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.103279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.107449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.252 [2024-12-06 11:28:35.107503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.252 [2024-12-06 11:28:35.107520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.252 [2024-12-06 11:28:35.111581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.111637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.111654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.115737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.115795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.115812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.119829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.119892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:02.253 [2024-12-06 11:28:35.119909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.123895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.123942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.123959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.128087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.128152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.128169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.132935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.132990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.133007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.137894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.137960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.137976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.143258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.143309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.143327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.148055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.148130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.152977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.153031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.153048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.158212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.158264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.158281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.162828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.162891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.162908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.167408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.167474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.167491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.171851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.171920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.171937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.175997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 
00:27:02.253 [2024-12-06 11:28:35.176050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.176073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.180594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.180645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.180662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.253 [2024-12-06 11:28:35.185434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.253 [2024-12-06 11:28:35.185524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.253 [2024-12-06 11:28:35.185541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.189668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.189723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.189739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.194072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.194184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.194201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.198515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.198581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.198599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.203235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.203307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.203324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.208175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.208253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.208270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.212725] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.212821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.212842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.217203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.217255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.217272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.221892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.221947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.226902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.227026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.227043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:02.512 [2024-12-06 11:28:35.231901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.231963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.231980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.236227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.236294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.236311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.240644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.240699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.512 [2024-12-06 11:28:35.240716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.512 [2024-12-06 11:28:35.244820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.512 [2024-12-06 11:28:35.244876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.513 [2024-12-06 11:28:35.244892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.249013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.249089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.249106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.253156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.253216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.253233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.257282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.257389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.257407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.261412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.261475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.261492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.265510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.265562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.265579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.269622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.269676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.269693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.273931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.273997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.274014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.278055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.278125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.278141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.282201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.282265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.282282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.286357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.286409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.286426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.290481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.290544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.290561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.294655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.294748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.294766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.299280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.299392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.299409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.305440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.305625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.305643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.311056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.311162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.311180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.316825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.316914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.316932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.322189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.322353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.322370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.328325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.328416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.328433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.334737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.334852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.334874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.340519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.340572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.340590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.345654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.345742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.345760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.350422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.350473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.350490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.355592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.355644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.355662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.360587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.513 [2024-12-06 11:28:35.360728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.513 [2024-12-06 11:28:35.360744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.513 [2024-12-06 11:28:35.366848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.366920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.372085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.372155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.377075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.377128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.377145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.381802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.381944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.381961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.386995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.387047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.387071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.391984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.392035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.392052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.396912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.397045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.397070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.401975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.402030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.402048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.407013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.407120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.407137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.412169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.412220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.412237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.417394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.417449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.417466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.422099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.422198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.422214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.427197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.427251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.427269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.432290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.432374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.432392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.437464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.437548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.437566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.442302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.442392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.442409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.514 [2024-12-06 11:28:35.447403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.514 [2024-12-06 11:28:35.447498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.514 [2024-12-06 11:28:35.447516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.452226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.452279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.452297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.456847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.456928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.461402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.461493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.461510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.466619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.466681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.466700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.472262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.472312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.472330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.477207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.477257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.477274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.481687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.481771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.481788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.486107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.486166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.486183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.490480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.773 [2024-12-06 11:28:35.490567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.773 [2024-12-06 11:28:35.490584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.773 [2024-12-06 11:28:35.495025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.495089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.495107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.499860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.499949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.499967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.504614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.504774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.504791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.510025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.510095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.510112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.514352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.514405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.514423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.518623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.518694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.518711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.522797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.522862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.526941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.526999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.527016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.531129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.531198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.531214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.535244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.535315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.535333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.539380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.539441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.539458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.543518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.543574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.543591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.547644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.547700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.547717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.551770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.551825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.551842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.555879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.555936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.555953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.560234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.560289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.560307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.564508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.564564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.564582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.568735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.568788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.568806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.572912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.572970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.572987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.577165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.577236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.581403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.581460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.581482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.585652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.585718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.585737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.590019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.590080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.590098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.594174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.774 [2024-12-06 11:28:35.594232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.774 [2024-12-06 11:28:35.594249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.774 [2024-12-06 11:28:35.598265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.775 [2024-12-06 11:28:35.598317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.775 [2024-12-06 11:28:35.598334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:02.775 [2024-12-06 11:28:35.602397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.775 [2024-12-06 11:28:35.602460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.775 [2024-12-06 11:28:35.602478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:02.775 [2024-12-06 11:28:35.606594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.775 [2024-12-06 11:28:35.606661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.775 [2024-12-06 11:28:35.606679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:02.775 [2024-12-06 11:28:35.610724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:02.775 [2024-12-06 11:28:35.610776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.775 [2024-12-06 11:28:35.610793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:02.775 [2024-12-06 11:28:35.615032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.615098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.615116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.619365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.619429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.619447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.623693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.623759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.623776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.628570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.628653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.628671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.634487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.634670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.634687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.641245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.641383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.647028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.647116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.647133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.652889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.652995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.653012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.659035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.659136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.659154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.664858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.664950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.664968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.671247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.671384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.671403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.678098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.678244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.678263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.685422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.685608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.685626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.692851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.693006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.693025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.699047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.699128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.775 [2024-12-06 11:28:35.699146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.775 [2024-12-06 11:28:35.704387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:02.775 [2024-12-06 11:28:35.704477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:02.775 [2024-12-06 11:28:35.704495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.710210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.710519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.710540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.715087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.715146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.715163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.719551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.719662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.719684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.724489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.724554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.724571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.729645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.729704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.729722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.734447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.734502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.734520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.739399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.739488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.739506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.744278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.744407] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.744423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.749435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.749491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.749508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.754219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.754303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.754320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.759140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.759216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.759235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.763578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.763637] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.763657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.767934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.768039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.772209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.772262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.776520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.776632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.776649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.780918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 
00:27:03.035 [2024-12-06 11:28:35.781008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.781025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.785462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.785518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.785535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.789852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.789903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.789920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.794630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.794684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.794702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.799142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.799195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.799212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.803295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.803354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.803371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.807401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.811607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.811671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.811689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.815779] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.815868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.815885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.819974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.820074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.820091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.824198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.824268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.824284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.828368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.828434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.828451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:03.035 [2024-12-06 11:28:35.832551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.832603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.836742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.836793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.836810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.840862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.840925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.840943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.844997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.845046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.845068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.849161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.849217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.849234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.853733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.853783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.853800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.858456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.858548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.858565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.863717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.863820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.863837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.868246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.868297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.868313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.872682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.872771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.872787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.877196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.877290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.877310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.881651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.881710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.881727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.885788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.885855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.885872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.889992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.890045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.890069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.894195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.894259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.894278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.898430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.898497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.035 [2024-12-06 11:28:35.898514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.902563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.902626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.902642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.906717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.906770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.906787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.910893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.910988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.911005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.915009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.915076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.915093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.919170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.919319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.919336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.923709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.923772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.923789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.927828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.927894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.927911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.931941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.932009] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.932026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.936006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.936088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.936105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.940621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.035 [2024-12-06 11:28:35.940711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.035 [2024-12-06 11:28:35.940728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.035 [2024-12-06 11:28:35.944878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.036 [2024-12-06 11:28:35.944932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.944949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.036 [2024-12-06 11:28:35.949415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.036 [2024-12-06 11:28:35.949486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.949503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.036 [2024-12-06 11:28:35.954274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.036 [2024-12-06 11:28:35.954345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.954362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.036 [2024-12-06 11:28:35.959204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.036 [2024-12-06 11:28:35.959276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.959294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.036 [2024-12-06 11:28:35.965011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.036 [2024-12-06 11:28:35.965102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.965119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.036 [2024-12-06 11:28:35.969985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 
00:27:03.036 [2024-12-06 11:28:35.970035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.036 [2024-12-06 11:28:35.970052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.974485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.974590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.974607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.979006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.979123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.979140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.983259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.983313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.983330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.987396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.987464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.991827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.991881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.991901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:35.996421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:35.996530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:35.996548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.001248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.001370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.001387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.005775] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.005968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.005986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.010568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.010790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.010809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.015133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.015347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.015365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.020199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.020420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.020439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:03.295 [2024-12-06 11:28:36.024782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.025001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.025019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.030065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.030290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.030309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.034637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.034856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.034875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.039354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.039576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.039596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.044251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.044474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.044494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 6568.00 IOPS, 821.00 MiB/s [2024-12-06T10:28:36.233Z] [2024-12-06 11:28:36.049697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.049749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.049767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.054594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.054649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.054666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.059599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.059769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 
11:28:36.059790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.064478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.064529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.064547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.069903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.069960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.069979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.074915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.074967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.074984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.079700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.079828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.079845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.084561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.084626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.084644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.089216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.089275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.089292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.094012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.094067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.094086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.099071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.295 [2024-12-06 11:28:36.099158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.295 [2024-12-06 11:28:36.099176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.295 [2024-12-06 11:28:36.104088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.104151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.104169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.109082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.109142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.109159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.113616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.113667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.113684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.118348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.118403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.118426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.122708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.122760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.122777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.126976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.127092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.127109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.131495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.131579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.131596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.136090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 
[2024-12-06 11:28:36.136195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.136212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.140681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.140764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.145215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.145281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.145299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.149804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.149857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.149875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.154223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.154272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.154290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.158545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.158604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.158622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.162963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.163020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.163037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.168017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.168122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.168140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.173136] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.173191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.173208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.178093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.178178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.178196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.182592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.182666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.182684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.187857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.187909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.187926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:03.296 [2024-12-06 11:28:36.192739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.192796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.192813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.197746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.197798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.197815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.202689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.202785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.202802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.207392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.207444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.207460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.212596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.212700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.212716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.218712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.218765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.218783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.223236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.223286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.223303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.296 [2024-12-06 11:28:36.227979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.296 [2024-12-06 11:28:36.228032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.296 [2024-12-06 11:28:36.228049] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.232458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.232519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.232536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.236763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.236812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.236830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.241257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.241332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.241353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.245736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.245818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.245835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.250448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.250547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.250564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.254905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.254969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.254987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.259381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.259436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.555 [2024-12-06 11:28:36.259453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.555 [2024-12-06 11:28:36.264159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.555 [2024-12-06 11:28:36.264260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.555 [2024-12-06 11:28:36.264277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.268802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.268863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.268880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.273326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.273425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.273442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.278021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.278076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.278094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.282228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.282281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.282298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.286705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.286792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.286809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.291817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.291922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.291939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.297435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.297506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.297524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.302525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.302586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.302603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.307442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.307520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.307537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.311945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.312029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.312047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.316604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.316688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.316706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.321257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.321368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.321385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.325538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.325599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.325616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.329806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.329910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.329926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.334590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.334663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.334682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.338825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with 
pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.338887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.338904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.343033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.343093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.343110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.347366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.347431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.347448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.351554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.351605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.351622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.355714] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.355777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.355794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.360114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.360181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.360200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.364916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.364968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.370033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.370161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.370179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 
11:28:36.374736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.374858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.374875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.379516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.379566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.379583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.384684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.384750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.389277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.389326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.389343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.393807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.393921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.393939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.398860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.398914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.398932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.403790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.403851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.403867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.408764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.408816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.408833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.413550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.413600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.413618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.418802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.418856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.418874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.423406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.423482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.423499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.427832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.427883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.427900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.432042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.432121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.432138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.436244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.436340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.436356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.440830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.440913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.445256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.445318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 
[2024-12-06 11:28:36.445335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.449568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.449635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.449652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.454208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.454263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.454280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.459021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.459151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.459168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.463883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.464023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.464040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.469143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.556 [2024-12-06 11:28:36.469248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.556 [2024-12-06 11:28:36.469265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.556 [2024-12-06 11:28:36.474597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.557 [2024-12-06 11:28:36.474724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.557 [2024-12-06 11:28:36.474741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.557 [2024-12-06 11:28:36.479770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.557 [2024-12-06 11:28:36.479840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.557 [2024-12-06 11:28:36.479857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.557 [2024-12-06 11:28:36.484764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.557 [2024-12-06 11:28:36.484837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.557 [2024-12-06 11:28:36.484858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.557 [2024-12-06 11:28:36.489834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.557 [2024-12-06 11:28:36.489894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.557 [2024-12-06 11:28:36.489911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.816 [2024-12-06 11:28:36.494797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.816 [2024-12-06 11:28:36.494865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.816 [2024-12-06 11:28:36.494883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.816 [2024-12-06 11:28:36.499825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.816 [2024-12-06 11:28:36.499893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.816 [2024-12-06 11:28:36.499910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.816 [2024-12-06 11:28:36.505368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:03.816 [2024-12-06 11:28:36.505530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.505548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.512907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.513044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.513069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.519279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.519350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.524903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.524993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.525010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.529645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.529739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.529757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.534741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.534801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.534818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.539255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.539326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.539343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.543473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.543544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.543561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.547717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.547765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.547782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.551902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.551999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.552016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.556163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.556253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.556270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.560445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.560494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.560511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.564683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.564736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.568937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.568998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.569016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.573230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.573282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.573300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.577461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.577516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.577534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.581628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.581690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.581707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.585845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.585923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.590038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.590119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.590136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.594210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.594304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.594322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.598942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.599077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.599095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.603516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.603673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.603691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.608311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.608413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.608433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.613192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.613355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.613372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.617920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.617997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.618015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.622894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.623043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.623066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.628188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.628316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.628333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.633145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.633241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.633258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.638119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.638249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.638266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.816 [2024-12-06 11:28:36.643328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.816 [2024-12-06 11:28:36.643435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.816 [2024-12-06 11:28:36.643452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.647885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.647991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.648008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.652854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.653035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.653055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.659075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.659242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.659259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.664419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.664497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.664514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.669772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.669850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.669868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.674947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.675037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.675054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.679954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.680029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.680047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.685219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.685331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.685348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.690149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.690251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.690268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.695108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.695194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.695212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.700039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.700169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.700186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.705057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.705169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.705186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.709907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.710011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.710028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.715224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.715315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.715334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.720255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.720394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.720411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.725203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.725313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.725331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.730243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.730322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.730340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.735005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.735110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.735127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.739924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.740009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.740026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.744763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.744858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.744875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:03.817 [2024-12-06 11:28:36.749881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:03.817 [2024-12-06 11:28:36.749982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.817 [2024-12-06 11:28:36.750000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.076 [2024-12-06 11:28:36.754532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.076 [2024-12-06 11:28:36.754628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.076 [2024-12-06 11:28:36.754645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.759313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.759414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.759431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.764295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.764381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.764399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.769107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.769187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.769204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.774031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.774131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.774149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.779117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.779219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.779237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.784068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.784178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.789343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.789453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.789472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.793589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.793639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.793657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.797782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.797844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.797861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.801984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.802036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.802054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.806230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.806288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.806306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.810481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.810532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.810550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.814730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.814781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.814799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.819019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.819078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.819096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.823259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.827547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.827608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.827626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.831785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.831857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.831874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.836362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.836429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.836447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.841352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.841406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.841424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.846302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.846361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.846378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.851123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.851191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.851208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.856554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.856641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.856658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.861755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.861813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.861829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.866434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.866497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.866515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.870997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.871076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.875879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.875930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.875948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.881147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.881241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.881259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.886211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.886263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.886281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.891535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.891659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.891677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.896882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.896970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.077 [2024-12-06 11:28:36.896988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:04.077 [2024-12-06 11:28:36.901519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8
00:27:04.077 [2024-12-06 11:28:36.901579] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.901597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.906127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.906185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.910359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.910424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.910442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.914598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.914667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.914684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.919090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.919144] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.919161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.923621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.923703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.928076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.928128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.928145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.932573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.932630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.932647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.937162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with 
pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.937232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.937249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.941562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.941655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.941673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.945789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.945849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.077 [2024-12-06 11:28:36.945866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.077 [2024-12-06 11:28:36.950402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.077 [2024-12-06 11:28:36.950509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.950525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.955066] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.955117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.955134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.959214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.959267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.959284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.963544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.963634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.963652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.968244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.968306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.968324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 
11:28:36.972576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.972682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.972700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.977947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.978131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.978148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.983730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.983829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.983846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.989481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.989654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.989671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:36.995448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:36.995627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:36.995644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:37.001603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:37.001741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:37.001758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.078 [2024-12-06 11:28:37.007682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.078 [2024-12-06 11:28:37.007847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.078 [2024-12-06 11:28:37.007864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.014009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.014146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.336 [2024-12-06 11:28:37.014164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.020392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.020566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.336 [2024-12-06 11:28:37.020583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.026913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.027080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.336 [2024-12-06 11:28:37.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.033404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.033583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.336 [2024-12-06 11:28:37.033601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.039833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.040012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.336 [2024-12-06 11:28:37.040036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.336 [2024-12-06 11:28:37.046171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.336 [2024-12-06 11:28:37.046332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.337 [2024-12-06 11:28:37.046349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.337 6490.00 IOPS, 811.25 MiB/s [2024-12-06T10:28:37.275Z] [2024-12-06 11:28:37.053017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x991c30) with pdu=0x200016eff3c8 00:27:04.337 [2024-12-06 11:28:37.053200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.337 [2024-12-06 11:28:37.053217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.337 00:27:04.337 Latency(us) 00:27:04.337 [2024-12-06T10:28:37.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:04.337 nvme0n1 : 2.00 6485.18 810.65 0.00 0.00 2462.47 1824.58 11796.48 00:27:04.337 [2024-12-06T10:28:37.275Z] =================================================================================================================== 00:27:04.337 [2024-12-06T10:28:37.275Z] Total : 6485.18 810.65 0.00 0.00 2462.47 1824.58 11796.48 00:27:04.337 { 00:27:04.337 "results": [ 00:27:04.337 { 00:27:04.337 "job": "nvme0n1", 00:27:04.337 "core_mask": "0x2", 00:27:04.337 "workload": "randwrite", 00:27:04.337 "status": "finished", 00:27:04.337 "queue_depth": 16, 
00:27:04.337 "io_size": 131072, 00:27:04.337 "runtime": 2.00457, 00:27:04.337 "iops": 6485.181360591049, 00:27:04.337 "mibps": 810.6476700738812, 00:27:04.337 "io_failed": 0, 00:27:04.337 "io_timeout": 0, 00:27:04.337 "avg_latency_us": 2462.469084195804, 00:27:04.337 "min_latency_us": 1824.581818181818, 00:27:04.337 "max_latency_us": 11796.48 00:27:04.337 } 00:27:04.337 ], 00:27:04.337 "core_count": 1 00:27:04.337 } 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:04.337 | .driver_specific 00:27:04.337 | .nvme_error 00:27:04.337 | .status_code 00:27:04.337 | .command_transient_transport_error' 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1876988 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1876988 ']' 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1876988 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.337 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 1876988 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876988' 00:27:04.595 killing process with pid 1876988 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1876988 00:27:04.595 Received shutdown signal, test time was about 2.000000 seconds 00:27:04.595 00:27:04.595 Latency(us) 00:27:04.595 [2024-12-06T10:28:37.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.595 [2024-12-06T10:28:37.533Z] =================================================================================================================== 00:27:04.595 [2024-12-06T10:28:37.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1876988 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1875107 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1875107 ']' 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1875107 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875107 00:27:04.595 11:28:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1875107' 00:27:04.595 killing process with pid 1875107 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1875107 00:27:04.595 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1875107 00:27:04.854 00:27:04.854 real 0m14.473s 00:27:04.854 user 0m27.526s 00:27:04.854 sys 0m4.671s 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.854 ************************************ 00:27:04.854 END TEST nvmf_digest_error 00:27:04.854 ************************************ 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.854 rmmod nvme_tcp 00:27:04.854 rmmod nvme_fabrics 00:27:04.854 
rmmod nvme_keyring 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1875107 ']' 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1875107 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1875107 ']' 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1875107 00:27:04.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1875107) - No such process 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1875107 is not found' 00:27:04.854 Process with pid 1875107 is not found 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.854 11:28:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.393 00:27:07.393 real 0m36.456s 00:27:07.393 user 0m54.976s 00:27:07.393 sys 0m13.899s 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:07.393 ************************************ 00:27:07.393 END TEST nvmf_digest 00:27:07.393 ************************************ 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.393 ************************************ 00:27:07.393 START TEST nvmf_bdevperf 00:27:07.393 ************************************ 00:27:07.393 11:28:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:07.393 * Looking for test storage... 
00:27:07.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:07.393 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:07.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.394 --rc genhtml_branch_coverage=1 00:27:07.394 --rc genhtml_function_coverage=1 00:27:07.394 --rc genhtml_legend=1 00:27:07.394 --rc geninfo_all_blocks=1 00:27:07.394 --rc geninfo_unexecuted_blocks=1 00:27:07.394 00:27:07.394 ' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:27:07.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.394 --rc genhtml_branch_coverage=1 00:27:07.394 --rc genhtml_function_coverage=1 00:27:07.394 --rc genhtml_legend=1 00:27:07.394 --rc geninfo_all_blocks=1 00:27:07.394 --rc geninfo_unexecuted_blocks=1 00:27:07.394 00:27:07.394 ' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:07.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.394 --rc genhtml_branch_coverage=1 00:27:07.394 --rc genhtml_function_coverage=1 00:27:07.394 --rc genhtml_legend=1 00:27:07.394 --rc geninfo_all_blocks=1 00:27:07.394 --rc geninfo_unexecuted_blocks=1 00:27:07.394 00:27:07.394 ' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:07.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.394 --rc genhtml_branch_coverage=1 00:27:07.394 --rc genhtml_function_coverage=1 00:27:07.394 --rc genhtml_legend=1 00:27:07.394 --rc geninfo_all_blocks=1 00:27:07.394 --rc geninfo_unexecuted_blocks=1 00:27:07.394 00:27:07.394 ' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.394 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.395 11:28:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.121 11:28:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:14.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.121 
11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:14.121 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:14.121 Found net devices under 0000:af:00.0: cvl_0_0 00:27:14.121 11:28:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:14.121 Found net devices under 0000:af:00.1: cvl_0_1 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.121 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.122 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:27:14.122 00:27:14.122 --- 10.0.0.2 ping statistics --- 00:27:14.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.122 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:14.122 00:27:14.122 --- 10.0.0.1 ping statistics --- 00:27:14.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.122 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1881330 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1881330 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1881330 ']' 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.122 [2024-12-06 11:28:46.173813] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:27:14.122 [2024-12-06 11:28:46.173863] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.122 [2024-12-06 11:28:46.249634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:14.122 [2024-12-06 11:28:46.289529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.122 [2024-12-06 11:28:46.289566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.122 [2024-12-06 11:28:46.289573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.122 [2024-12-06 11:28:46.289579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.122 [2024-12-06 11:28:46.289583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:14.122 [2024-12-06 11:28:46.291018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.122 [2024-12-06 11:28:46.291133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.122 [2024-12-06 11:28:46.291134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.122 [2024-12-06 11:28:46.429843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.122 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.123 Malloc0 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.123 [2024-12-06 11:28:46.496944] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:14.123 
11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:14.123 { 00:27:14.123 "params": { 00:27:14.123 "name": "Nvme$subsystem", 00:27:14.123 "trtype": "$TEST_TRANSPORT", 00:27:14.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.123 "adrfam": "ipv4", 00:27:14.123 "trsvcid": "$NVMF_PORT", 00:27:14.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.123 "hdgst": ${hdgst:-false}, 00:27:14.123 "ddgst": ${ddgst:-false} 00:27:14.123 }, 00:27:14.123 "method": "bdev_nvme_attach_controller" 00:27:14.123 } 00:27:14.123 EOF 00:27:14.123 )") 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:14.123 11:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:14.123 "params": { 00:27:14.123 "name": "Nvme1", 00:27:14.123 "trtype": "tcp", 00:27:14.123 "traddr": "10.0.0.2", 00:27:14.123 "adrfam": "ipv4", 00:27:14.123 "trsvcid": "4420", 00:27:14.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.123 "hdgst": false, 00:27:14.123 "ddgst": false 00:27:14.123 }, 00:27:14.123 "method": "bdev_nvme_attach_controller" 00:27:14.123 }' 00:27:14.123 [2024-12-06 11:28:46.547230] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:27:14.123 [2024-12-06 11:28:46.547271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881545 ] 00:27:14.123 [2024-12-06 11:28:46.620492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.123 [2024-12-06 11:28:46.658650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.123 Running I/O for 1 seconds... 00:27:15.057 12302.00 IOPS, 48.05 MiB/s 00:27:15.057 Latency(us) 00:27:15.057 [2024-12-06T10:28:47.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:15.057 Verification LBA range: start 0x0 length 0x4000 00:27:15.057 Nvme1n1 : 1.00 12382.20 48.37 0.00 0.00 10300.50 904.84 12988.04 00:27:15.057 [2024-12-06T10:28:47.995Z] =================================================================================================================== 00:27:15.057 [2024-12-06T10:28:47.995Z] Total : 12382.20 48.37 0.00 0.00 10300.50 904.84 12988.04 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1881806 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.316 { 00:27:15.316 "params": { 00:27:15.316 "name": "Nvme$subsystem", 00:27:15.316 "trtype": "$TEST_TRANSPORT", 00:27:15.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.316 "adrfam": "ipv4", 00:27:15.316 "trsvcid": "$NVMF_PORT", 00:27:15.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.316 "hdgst": ${hdgst:-false}, 00:27:15.316 "ddgst": ${ddgst:-false} 00:27:15.316 }, 00:27:15.316 "method": "bdev_nvme_attach_controller" 00:27:15.316 } 00:27:15.316 EOF 00:27:15.316 )") 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:15.316 11:28:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:15.316 "params": { 00:27:15.316 "name": "Nvme1", 00:27:15.316 "trtype": "tcp", 00:27:15.316 "traddr": "10.0.0.2", 00:27:15.316 "adrfam": "ipv4", 00:27:15.316 "trsvcid": "4420", 00:27:15.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:15.316 "hdgst": false, 00:27:15.316 "ddgst": false 00:27:15.316 }, 00:27:15.316 "method": "bdev_nvme_attach_controller" 00:27:15.316 }' 00:27:15.316 [2024-12-06 11:28:48.186214] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
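Once the original target process is killed below, every in-flight command on qpair 1 is completed with `ABORTED - SQ DELETION`, producing the long run of paired `nvme_io_qpair_print_command` / `spdk_nvme_print_completion` records. A hedged sketch (not part of the SPDK harness) of how such a log can be summarized, pairing each command record with the completion record that follows it to tally aborts per opcode; two sample records stand in for the real log file:

```shell
#!/usr/bin/env bash
# Sketch: count "ABORTED - SQ DELETION" completions by opcode (READ/WRITE).
# Each aborted completion line in the log is preceded by the command it
# aborts, so remembering the last opcode seen is enough to attribute it.
log='[2024-12-06 11:28:51.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 11:28:51.154453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 11:28:51.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-06 11:28:51.154915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0'

summary="$(printf '%s\n' "$log" | awk '
  /nvme_io_qpair_print_command/ { op = ($0 ~ / READ /) ? "READ" : "WRITE" }
  /ABORTED - SQ DELETION/       { count[op]++ }
  END { for (o in count) printf "%s aborted: %d\n", o, count[o] }')"
printf '%s\n' "$summary"
```

In practice the awk filter would read the real log file; on the two sample records it reports one aborted READ and one aborted WRITE.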
00:27:15.316 [2024-12-06 11:28:48.186256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881806 ] 00:27:15.575 [2024-12-06 11:28:48.256965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.575 [2024-12-06 11:28:48.292219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.834 Running I/O for 15 seconds... 00:27:17.707 12305.00 IOPS, 48.07 MiB/s [2024-12-06T10:28:51.219Z] 12320.00 IOPS, 48.12 MiB/s [2024-12-06T10:28:51.219Z] 11:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1881330 00:27:18.281 11:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:18.281 [2024-12-06 11:28:51.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 
11:28:51.154682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-12-06 11:28:51.154915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-12-06 11:28:51.154928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-12-06 11:28:51.154942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-12-06 
11:28:51.154956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.281 [2024-12-06 11:28:51.154969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.281 [2024-12-06 11:28:51.154977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.154983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.154991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.154997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155033] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.282 [2024-12-06 11:28:51.155291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 
11:28:51.155304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 
11:28:51.155542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.282 [2024-12-06 11:28:51.155596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.282 [2024-12-06 11:28:51.155603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 
11:28:51.155773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.155992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 
11:28:51.156006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156084] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 11:28:51.156231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.283 [2024-12-06 
11:28:51.156244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.283 [2024-12-06 11:28:51.156251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156318] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.284 [2024-12-06 11:28:51.156380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.156386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x910f00 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.156394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:18.284 [2024-12-06 11:28:51.156399] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:18.284 [2024-12-06 11:28:51.156404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122672 len:8 PRP1 0x0 PRP2 0x0 00:27:18.284 [2024-12-06 11:28:51.156412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.284 [2024-12-06 11:28:51.159040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.284 [2024-12-06 11:28:51.159098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.284 [2024-12-06 11:28:51.159676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.284 [2024-12-06 11:28:51.159691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.284 [2024-12-06 11:28:51.159699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.159860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.284 [2024-12-06 11:28:51.160020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.284 [2024-12-06 11:28:51.160028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.284 [2024-12-06 11:28:51.160036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.284 [2024-12-06 11:28:51.160043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.284 [2024-12-06 11:28:51.172001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.284 [2024-12-06 11:28:51.172456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.284 [2024-12-06 11:28:51.172495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.284 [2024-12-06 11:28:51.172521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.173106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.284 [2024-12-06 11:28:51.173268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.284 [2024-12-06 11:28:51.173287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.284 [2024-12-06 11:28:51.173294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.284 [2024-12-06 11:28:51.173302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.284 [2024-12-06 11:28:51.184630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.284 [2024-12-06 11:28:51.185040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.284 [2024-12-06 11:28:51.185094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.284 [2024-12-06 11:28:51.185120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.185704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.284 [2024-12-06 11:28:51.186008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.284 [2024-12-06 11:28:51.186017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.284 [2024-12-06 11:28:51.186023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.284 [2024-12-06 11:28:51.186028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.284 [2024-12-06 11:28:51.197250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.284 [2024-12-06 11:28:51.197655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.284 [2024-12-06 11:28:51.197702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.284 [2024-12-06 11:28:51.197726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.198208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.284 [2024-12-06 11:28:51.198366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.284 [2024-12-06 11:28:51.198375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.284 [2024-12-06 11:28:51.198381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.284 [2024-12-06 11:28:51.198387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.284 [2024-12-06 11:28:51.209983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.284 [2024-12-06 11:28:51.210395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.284 [2024-12-06 11:28:51.210412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.284 [2024-12-06 11:28:51.210419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.284 [2024-12-06 11:28:51.210579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.210739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.210748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.210755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.210761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.222666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.223091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.223108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.223118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.223274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.223430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.223439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.223446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.223452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.235302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.235722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.235739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.235746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.235901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.236065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.236075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.236081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.236088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.248034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.248434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.248450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.248458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.248613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.248770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.248778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.248784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.248790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.260666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.261084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.261115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.261140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.261723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.262310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.262319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.262325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.262332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.273222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.273592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.273609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.273616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.273764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.273912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.545 [2024-12-06 11:28:51.273921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.545 [2024-12-06 11:28:51.273927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.545 [2024-12-06 11:28:51.273933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.545 [2024-12-06 11:28:51.285813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.545 [2024-12-06 11:28:51.286134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.545 [2024-12-06 11:28:51.286179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.545 [2024-12-06 11:28:51.286203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.545 [2024-12-06 11:28:51.286786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.545 [2024-12-06 11:28:51.287373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.287382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.287388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.287395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.298362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.298773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.298789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.298796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.298945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.299098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.299107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.299132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.299139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.311000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.311306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.311323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.311329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.311477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.311626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.311635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.311641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.311647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.323658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.324080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.324097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.324105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.324252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.324401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.324409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.324415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.324421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.336286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.336695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.336712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.336719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.336867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.337017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.337025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.337031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.337036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.348903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.349317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.349334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.349341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.349488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.349636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.349645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.349651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.349657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.361483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.361901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.361917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.361924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.362078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.362251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.362260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.362266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.362272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.374031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.374448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.374464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.374471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.374620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.374770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.374779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.374785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.374791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.386653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.387041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.387062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.387072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.387242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.546 [2024-12-06 11:28:51.387398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.546 [2024-12-06 11:28:51.387407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.546 [2024-12-06 11:28:51.387413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.546 [2024-12-06 11:28:51.387419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.546 [2024-12-06 11:28:51.399296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.546 [2024-12-06 11:28:51.399713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.546 [2024-12-06 11:28:51.399729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.546 [2024-12-06 11:28:51.399736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.546 [2024-12-06 11:28:51.399885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.400033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.400042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.400048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.400054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.411876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.412176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.412193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.412200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.412369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.412524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.412533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.412540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.412546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.424686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.425088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.425104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.425111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.425272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.425436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.425445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.425451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.425457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.437510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.437869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.437913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.437936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.438535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.438992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.439000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.439006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.439012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.450198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.450620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.450637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.450644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.450799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.450956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.450965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.450971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.450977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.462789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.463143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.463159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.463166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.463327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.463477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.463485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.463495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.463501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.547 [2024-12-06 11:28:51.475429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.547 [2024-12-06 11:28:51.475790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.547 [2024-12-06 11:28:51.475808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.547 [2024-12-06 11:28:51.475816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.547 [2024-12-06 11:28:51.475976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.547 [2024-12-06 11:28:51.476142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.547 [2024-12-06 11:28:51.476154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.547 [2024-12-06 11:28:51.476161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.547 [2024-12-06 11:28:51.476168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.807 [2024-12-06 11:28:51.488229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.807 [2024-12-06 11:28:51.488644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.807 [2024-12-06 11:28:51.488689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.807 [2024-12-06 11:28:51.488712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.807 [2024-12-06 11:28:51.489145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.807 [2024-12-06 11:28:51.489295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.807 [2024-12-06 11:28:51.489304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.807 [2024-12-06 11:28:51.489310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.807 [2024-12-06 11:28:51.489316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.807 [2024-12-06 11:28:51.500953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.807 [2024-12-06 11:28:51.501289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.807 [2024-12-06 11:28:51.501305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.807 [2024-12-06 11:28:51.501311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.807 [2024-12-06 11:28:51.501458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.807 [2024-12-06 11:28:51.501607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.807 [2024-12-06 11:28:51.501616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.807 [2024-12-06 11:28:51.501622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.807 [2024-12-06 11:28:51.501628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.807 [2024-12-06 11:28:51.513517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.807 [2024-12-06 11:28:51.513949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.807 [2024-12-06 11:28:51.513993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.807 [2024-12-06 11:28:51.514016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.807 [2024-12-06 11:28:51.514618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.807 [2024-12-06 11:28:51.514805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.807 [2024-12-06 11:28:51.514813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.807 [2024-12-06 11:28:51.514819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.807 [2024-12-06 11:28:51.514825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.807 [2024-12-06 11:28:51.526179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.807 [2024-12-06 11:28:51.526579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.807 [2024-12-06 11:28:51.526623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.807 [2024-12-06 11:28:51.526646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.807 [2024-12-06 11:28:51.527049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.807 [2024-12-06 11:28:51.527206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.807 [2024-12-06 11:28:51.527215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.807 [2024-12-06 11:28:51.527220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.807 [2024-12-06 11:28:51.527226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.538734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.539158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.539204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.539228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.539810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.539988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.539996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.540002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.540008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.551274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.551686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.551730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.551763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.552365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.552968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.552977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.552983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.552989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.563832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.564236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.564280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.564304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.564876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.565030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.565039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.565045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.565051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.576379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.576709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.576724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.576731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.576879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.577028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.577036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.577042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.577048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 10532.00 IOPS, 41.14 MiB/s [2024-12-06T10:28:51.746Z] [2024-12-06 11:28:51.589031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.589352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.589375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.589382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.589530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.589681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.589690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.589696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.589702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.601637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.602044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.602064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.602072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.602219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.602367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.602376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.602381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.602387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.614167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.614485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.614506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.614513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.614660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.614809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.614817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.614823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.614829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.626752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.627000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.627017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.627024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.627195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.627352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.808 [2024-12-06 11:28:51.627361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.808 [2024-12-06 11:28:51.627373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.808 [2024-12-06 11:28:51.627378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.808 [2024-12-06 11:28:51.639390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.808 [2024-12-06 11:28:51.639816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.808 [2024-12-06 11:28:51.639860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.808 [2024-12-06 11:28:51.639884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.808 [2024-12-06 11:28:51.640307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.808 [2024-12-06 11:28:51.640457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.640465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.640471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.640477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.652045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.652463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.652480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.652487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.652635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.652783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.652792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.652797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.652803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.664670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.665098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.665130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.665137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.665292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.665453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.665463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.665470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.665476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.677410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.677862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.677905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.677929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.678480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.678638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.678647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.678653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.678659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.689968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.690303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.690348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.690372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.690956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.691491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.691501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.691506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.691513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.702509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.702925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.702941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.702948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.703101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.703273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.703282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.703288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.703294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.715177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.715557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.715574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.715583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.715731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.715879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.715887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.715893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.715899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.727774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:18.809 [2024-12-06 11:28:51.728196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.809 [2024-12-06 11:28:51.728241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:18.809 [2024-12-06 11:28:51.728264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:18.809 [2024-12-06 11:28:51.728847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:18.809 [2024-12-06 11:28:51.729279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:18.809 [2024-12-06 11:28:51.729298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:18.809 [2024-12-06 11:28:51.729313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:18.809 [2024-12-06 11:28:51.729327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:18.809 [2024-12-06 11:28:51.742627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.085 [2024-12-06 11:28:51.743157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.085 [2024-12-06 11:28:51.743179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.085 [2024-12-06 11:28:51.743190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.085 [2024-12-06 11:28:51.743446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.085 [2024-12-06 11:28:51.743703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.085 [2024-12-06 11:28:51.743716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.085 [2024-12-06 11:28:51.743726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.085 [2024-12-06 11:28:51.743735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.085 [2024-12-06 11:28:51.755617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.085 [2024-12-06 11:28:51.756068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.085 [2024-12-06 11:28:51.756087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.085 [2024-12-06 11:28:51.756095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.085 [2024-12-06 11:28:51.756268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.085 [2024-12-06 11:28:51.756445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.085 [2024-12-06 11:28:51.756455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.756462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.756469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.768197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.768606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.768652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.768676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.769273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.769701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.769718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.769732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.769746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.783235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.783763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.783785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.783796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.784051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.784317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.784330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.784341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.784350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.796159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.796523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.796568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.796591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.797190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.797671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.797680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.797687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.797697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.808832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.809273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.809290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.809297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.809454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.809610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.809619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.809625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.809631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.821449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.821831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.821871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.821897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.822443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.822601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.822610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.822616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.822622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.834074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.834381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.834411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.834435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.835018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.835618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.835645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.835667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.835687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.846789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.847106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.847122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.847128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.847276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.847424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.847433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.847439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.847445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.859456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.859878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.859895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.859901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.086 [2024-12-06 11:28:51.860064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.086 [2024-12-06 11:28:51.860220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.086 [2024-12-06 11:28:51.860229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.086 [2024-12-06 11:28:51.860236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.086 [2024-12-06 11:28:51.860242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.086 [2024-12-06 11:28:51.872130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.086 [2024-12-06 11:28:51.872549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.086 [2024-12-06 11:28:51.872565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.086 [2024-12-06 11:28:51.872573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.087 [2024-12-06 11:28:51.872728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.087 [2024-12-06 11:28:51.872885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.087 [2024-12-06 11:28:51.872894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.087 [2024-12-06 11:28:51.872900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.087 [2024-12-06 11:28:51.872907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.087 [2024-12-06 11:28:51.884822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.087 [2024-12-06 11:28:51.885244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.087 [2024-12-06 11:28:51.885284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.087 [2024-12-06 11:28:51.885317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.087 [2024-12-06 11:28:51.885897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.087 [2024-12-06 11:28:51.886046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.087 [2024-12-06 11:28:51.886053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.087 [2024-12-06 11:28:51.886062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.087 [2024-12-06 11:28:51.886068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.087 [2024-12-06 11:28:51.897435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.087 [2024-12-06 11:28:51.897854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.087 [2024-12-06 11:28:51.897870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.087 [2024-12-06 11:28:51.897879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.087 [2024-12-06 11:28:51.898035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.087 [2024-12-06 11:28:51.898198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.087 [2024-12-06 11:28:51.898208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.087 [2024-12-06 11:28:51.898214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.087 [2024-12-06 11:28:51.898220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.087 [2024-12-06 11:28:51.910132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.910567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.910612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.910635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.911050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.911228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.911237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.911244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.911250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.087 [2024-12-06 11:28:51.922899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.923327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.923345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.923352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.923512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.923673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.923684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.923691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.923697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.087 [2024-12-06 11:28:51.935590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.936016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.936075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.936101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.936658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.936807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.936816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.936821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.936827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.087 [2024-12-06 11:28:51.948187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.948519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.948536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.948542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.948689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.948838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.948847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.948853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.948859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.087 [2024-12-06 11:28:51.960772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.961115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.961161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.961185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.961770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.962109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.962118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.962125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.962133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.087 [2024-12-06 11:28:51.973426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.087 [2024-12-06 11:28:51.973772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.087 [2024-12-06 11:28:51.973816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.087 [2024-12-06 11:28:51.973840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.087 [2024-12-06 11:28:51.974438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.087 [2024-12-06 11:28:51.975027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.087 [2024-12-06 11:28:51.975051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.087 [2024-12-06 11:28:51.975090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.087 [2024-12-06 11:28:51.975098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.088 [2024-12-06 11:28:51.986005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.088 [2024-12-06 11:28:51.986417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.088 [2024-12-06 11:28:51.986467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.088 [2024-12-06 11:28:51.986491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.088 [2024-12-06 11:28:51.987027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.088 [2024-12-06 11:28:51.987202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.088 [2024-12-06 11:28:51.987211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.088 [2024-12-06 11:28:51.987217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.088 [2024-12-06 11:28:51.987223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.088 [2024-12-06 11:28:51.998600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.088 [2024-12-06 11:28:51.998950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.088 [2024-12-06 11:28:51.998966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.088 [2024-12-06 11:28:51.998973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.088 [2024-12-06 11:28:51.999143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.088 [2024-12-06 11:28:51.999300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.088 [2024-12-06 11:28:51.999309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.088 [2024-12-06 11:28:51.999315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.088 [2024-12-06 11:28:51.999321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.088 [2024-12-06 11:28:52.011351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.088 [2024-12-06 11:28:52.011707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.088 [2024-12-06 11:28:52.011751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.088 [2024-12-06 11:28:52.011776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.088 [2024-12-06 11:28:52.012250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.088 [2024-12-06 11:28:52.012414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.088 [2024-12-06 11:28:52.012422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.088 [2024-12-06 11:28:52.012428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.088 [2024-12-06 11:28:52.012434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.348 [2024-12-06 11:28:52.024140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.348 [2024-12-06 11:28:52.024584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.348 [2024-12-06 11:28:52.024629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.348 [2024-12-06 11:28:52.024652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.348 [2024-12-06 11:28:52.025181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.348 [2024-12-06 11:28:52.025340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.348 [2024-12-06 11:28:52.025349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.348 [2024-12-06 11:28:52.025355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.348 [2024-12-06 11:28:52.025361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.348 [2024-12-06 11:28:52.036854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.348 [2024-12-06 11:28:52.037242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.348 [2024-12-06 11:28:52.037257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.348 [2024-12-06 11:28:52.037264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.348 [2024-12-06 11:28:52.037411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.348 [2024-12-06 11:28:52.037560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.348 [2024-12-06 11:28:52.037569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.348 [2024-12-06 11:28:52.037576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.348 [2024-12-06 11:28:52.037581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.348 [2024-12-06 11:28:52.049487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.348 [2024-12-06 11:28:52.049859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.348 [2024-12-06 11:28:52.049876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.348 [2024-12-06 11:28:52.049883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.348 [2024-12-06 11:28:52.050041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.348 [2024-12-06 11:28:52.050201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.348 [2024-12-06 11:28:52.050216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.348 [2024-12-06 11:28:52.050222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.348 [2024-12-06 11:28:52.050228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.348 [2024-12-06 11:28:52.062145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.062539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.062557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.062565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.062720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.062877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.062885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.062891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.062898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.074999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.075467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.075518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.075543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.076123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.076285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.076294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.076301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.076307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.087663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.088074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.088122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.088147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.088732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.089336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.089349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.089355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.089361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.100350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.100716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.100732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.100738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.100886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.101035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.101044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.101050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.101055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.113038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.113411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.113427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.113435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.113584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.113732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.113741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.113747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.113753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.125706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.126036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.126053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.126064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.126235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.126391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.126400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.126406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.126416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.138396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.138659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.138676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.138683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.138839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.138996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.139006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.139012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.139017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.151087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.151396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.151412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.151419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.151567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.151716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.151724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.349 [2024-12-06 11:28:52.151730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.349 [2024-12-06 11:28:52.151736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.349 [2024-12-06 11:28:52.163769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.349 [2024-12-06 11:28:52.164162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.349 [2024-12-06 11:28:52.164208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.349 [2024-12-06 11:28:52.164231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.349 [2024-12-06 11:28:52.164659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.349 [2024-12-06 11:28:52.164816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.349 [2024-12-06 11:28:52.164825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.164831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.164837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.176563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.176975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.176995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.177003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.177165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.177323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.177332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.177338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.177344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.189351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.189700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.189716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.189723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.189879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.190035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.190044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.190050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.190056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.201996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.202386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.202434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.202457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.203039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.203639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.203648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.203654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.203660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.214633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.215049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.215070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.215078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.215231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.215379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.215388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.215394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.215400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.227380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.227834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.227851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.227858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.228013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.228175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.228185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.228190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.228197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.240080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.240508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.240552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.240576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.241173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.241746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.241755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.241761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.241767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.252792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.350 [2024-12-06 11:28:52.253134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.350 [2024-12-06 11:28:52.253152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.350 [2024-12-06 11:28:52.253159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.350 [2024-12-06 11:28:52.253314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.350 [2024-12-06 11:28:52.253471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.350 [2024-12-06 11:28:52.253485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.350 [2024-12-06 11:28:52.253492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.350 [2024-12-06 11:28:52.253498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.350 [2024-12-06 11:28:52.265366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.350 [2024-12-06 11:28:52.265809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.350 [2024-12-06 11:28:52.265855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.350 [2024-12-06 11:28:52.265878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.350 [2024-12-06 11:28:52.266540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.350 [2024-12-06 11:28:52.266748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.350 [2024-12-06 11:28:52.266756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.350 [2024-12-06 11:28:52.266762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.350 [2024-12-06 11:28:52.266768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.350 [2024-12-06 11:28:52.278098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.350 [2024-12-06 11:28:52.278504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.350 [2024-12-06 11:28:52.278521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.351 [2024-12-06 11:28:52.278528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.351 [2024-12-06 11:28:52.278688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.351 [2024-12-06 11:28:52.278848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.351 [2024-12-06 11:28:52.278857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.351 [2024-12-06 11:28:52.278864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.351 [2024-12-06 11:28:52.278869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.612 [2024-12-06 11:28:52.291007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.612 [2024-12-06 11:28:52.291457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.612 [2024-12-06 11:28:52.291503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.612 [2024-12-06 11:28:52.291527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.612 [2024-12-06 11:28:52.292136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.612 [2024-12-06 11:28:52.292286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.612 [2024-12-06 11:28:52.292295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.612 [2024-12-06 11:28:52.292301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.612 [2024-12-06 11:28:52.292307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.612 [2024-12-06 11:28:52.303715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.612 [2024-12-06 11:28:52.304177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.612 [2024-12-06 11:28:52.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.612 [2024-12-06 11:28:52.304200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.612 [2024-12-06 11:28:52.304348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.612 [2024-12-06 11:28:52.304496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.612 [2024-12-06 11:28:52.304505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.612 [2024-12-06 11:28:52.304511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.612 [2024-12-06 11:28:52.304517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.612 [2024-12-06 11:28:52.316437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.612 [2024-12-06 11:28:52.316832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.612 [2024-12-06 11:28:52.316848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.612 [2024-12-06 11:28:52.316855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.612 [2024-12-06 11:28:52.317002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.612 [2024-12-06 11:28:52.317159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.612 [2024-12-06 11:28:52.317168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.612 [2024-12-06 11:28:52.317174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.612 [2024-12-06 11:28:52.317179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.612 [2024-12-06 11:28:52.329160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.612 [2024-12-06 11:28:52.329569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.612 [2024-12-06 11:28:52.329586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.612 [2024-12-06 11:28:52.329593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.612 [2024-12-06 11:28:52.329740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.612 [2024-12-06 11:28:52.329889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.612 [2024-12-06 11:28:52.329897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.612 [2024-12-06 11:28:52.329903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.612 [2024-12-06 11:28:52.329909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.612 [2024-12-06 11:28:52.341834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.612 [2024-12-06 11:28:52.342261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.342314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.342338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.342927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.343083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.343092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.343098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.343104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.354461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.354926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.354970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.354993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.355588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.356010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.356018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.356024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.356030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.367164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.367587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.367633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.367656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.368125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.368281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.368291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.368296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.368302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.379870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.380229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.380246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.380252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.380412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.380569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.380577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.380584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.380590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.392581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.393000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.393044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.393093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.393679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.394169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.394178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.394185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.394191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.405304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.405623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.405666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.405689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.406237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.406388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.406396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.406402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.406408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.417999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.613 [2024-12-06 11:28:52.418276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.613 [2024-12-06 11:28:52.418293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.613 [2024-12-06 11:28:52.418301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.613 [2024-12-06 11:28:52.418459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.613 [2024-12-06 11:28:52.418619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.613 [2024-12-06 11:28:52.418627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.613 [2024-12-06 11:28:52.418637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.613 [2024-12-06 11:28:52.418644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.613 [2024-12-06 11:28:52.430683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.432211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.432234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.432242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.432422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.432584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.432592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.432598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.432605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.443516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.443909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.443926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.443933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.444105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.444262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.444271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.444277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.444283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.456144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.456460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.456477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.456485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.456640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.456797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.456806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.456812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.456818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.468751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.469089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.469106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.469114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.469269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.469430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.469439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.469445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.469451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.481385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.481642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.481658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.481665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.481813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.481962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.481971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.481977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.481984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.494137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.494394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.494410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.494418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.494565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.494714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.494722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.494728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.494734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.614 [2024-12-06 11:28:52.506844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:19.614 [2024-12-06 11:28:52.507184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.614 [2024-12-06 11:28:52.507204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:19.614 [2024-12-06 11:28:52.507211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:19.614 [2024-12-06 11:28:52.507359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:19.614 [2024-12-06 11:28:52.507508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:19.614 [2024-12-06 11:28:52.507516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:19.614 [2024-12-06 11:28:52.507522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:19.614 [2024-12-06 11:28:52.507528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:19.615 [2024-12-06 11:28:52.519504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.615 [2024-12-06 11:28:52.519856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-06 11:28:52.519873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.615 [2024-12-06 11:28:52.519879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.615 [2024-12-06 11:28:52.520027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.615 [2024-12-06 11:28:52.520180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.615 [2024-12-06 11:28:52.520189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.615 [2024-12-06 11:28:52.520195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.615 [2024-12-06 11:28:52.520201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.615 [2024-12-06 11:28:52.532186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.615 [2024-12-06 11:28:52.532527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-06 11:28:52.532543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.615 [2024-12-06 11:28:52.532550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.615 [2024-12-06 11:28:52.532698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.615 [2024-12-06 11:28:52.532847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.615 [2024-12-06 11:28:52.532855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.615 [2024-12-06 11:28:52.532861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.615 [2024-12-06 11:28:52.532867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.615 [2024-12-06 11:28:52.545044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.615 [2024-12-06 11:28:52.545394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-12-06 11:28:52.545410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.615 [2024-12-06 11:28:52.545417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.615 [2024-12-06 11:28:52.545577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.615 [2024-12-06 11:28:52.545740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.615 [2024-12-06 11:28:52.545750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.615 [2024-12-06 11:28:52.545756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.615 [2024-12-06 11:28:52.545762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.876 [2024-12-06 11:28:52.557709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.876 [2024-12-06 11:28:52.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-06 11:28:52.558142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-06 11:28:52.558149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.876 [2024-12-06 11:28:52.558305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.876 [2024-12-06 11:28:52.558461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.876 [2024-12-06 11:28:52.558470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.876 [2024-12-06 11:28:52.558476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.876 [2024-12-06 11:28:52.558482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.876 [2024-12-06 11:28:52.570311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.876 [2024-12-06 11:28:52.570720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-06 11:28:52.570766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-06 11:28:52.570790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.876 [2024-12-06 11:28:52.571389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.876 [2024-12-06 11:28:52.571777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.876 [2024-12-06 11:28:52.571785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.876 [2024-12-06 11:28:52.571792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.876 [2024-12-06 11:28:52.571798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.876 7899.00 IOPS, 30.86 MiB/s [2024-12-06T10:28:52.814Z] [2024-12-06 11:28:52.584083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.876 [2024-12-06 11:28:52.584491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-06 11:28:52.584507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-06 11:28:52.584541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.876 [2024-12-06 11:28:52.585142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.876 [2024-12-06 11:28:52.585374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.876 [2024-12-06 11:28:52.585383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.876 [2024-12-06 11:28:52.585392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.876 [2024-12-06 11:28:52.585398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.876 [2024-12-06 11:28:52.596631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.876 [2024-12-06 11:28:52.597027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-06 11:28:52.597084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-06 11:28:52.597109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.876 [2024-12-06 11:28:52.597642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.876 [2024-12-06 11:28:52.597790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.876 [2024-12-06 11:28:52.597797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.876 [2024-12-06 11:28:52.597803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.876 [2024-12-06 11:28:52.597809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.609201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.609606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.609653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.609677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.610279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.610520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.610529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.610535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.610542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.621766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.622098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.622115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.622123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.622272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.622420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.622428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.622434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.622440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.634310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.634739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.634807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.635407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.635826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.635835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.635841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.635847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.646975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.647366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.647383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.647389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.647537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.647684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.647693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.647699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.647705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.659566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.659975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.659991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.659999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.660170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.660327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.660335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.660342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.660348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.672146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.672509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.672556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.672588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.673189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.673780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.673805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.673826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.673855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.687223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.687753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.687775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.687785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.877 [2024-12-06 11:28:52.688040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.877 [2024-12-06 11:28:52.688304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.877 [2024-12-06 11:28:52.688317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.877 [2024-12-06 11:28:52.688326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.877 [2024-12-06 11:28:52.688336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.877 [2024-12-06 11:28:52.700161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.877 [2024-12-06 11:28:52.700597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.877 [2024-12-06 11:28:52.700615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.877 [2024-12-06 11:28:52.700622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.700790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.700959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.700968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.700975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.700981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.712831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.713240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.713301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.713324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.713908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.714422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.714443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.714457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.714472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.727820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.728363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.728408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.728429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.728950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.729214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.729227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.729237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.729247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.740876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.741294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.741311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.741319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.741492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.741667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.741677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.741684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.741691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.753678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.754111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.754156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.754180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.754764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.755119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.755138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.755159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.755173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.768516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.769042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.769109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.769133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.769717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.770040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.770053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.770069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.770078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.781416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.781852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.781896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.781920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.782520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.783039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.783048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.783055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.783066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.794034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.794427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.794443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.794450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.794598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.794746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.794755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.794761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.794767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:19.878 [2024-12-06 11:28:52.806703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:19.878 [2024-12-06 11:28:52.807051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.878 [2024-12-06 11:28:52.807073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:19.878 [2024-12-06 11:28:52.807081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:19.878 [2024-12-06 11:28:52.807240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:19.878 [2024-12-06 11:28:52.807401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:19.878 [2024-12-06 11:28:52.807409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:19.878 [2024-12-06 11:28:52.807415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:19.878 [2024-12-06 11:28:52.807421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.819459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.139 [2024-12-06 11:28:52.819846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.139 [2024-12-06 11:28:52.819862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.139 [2024-12-06 11:28:52.819869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.139 [2024-12-06 11:28:52.820016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.139 [2024-12-06 11:28:52.820190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.139 [2024-12-06 11:28:52.820200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.139 [2024-12-06 11:28:52.820206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.139 [2024-12-06 11:28:52.820212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.832169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.139 [2024-12-06 11:28:52.832567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.139 [2024-12-06 11:28:52.832612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.139 [2024-12-06 11:28:52.832635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.139 [2024-12-06 11:28:52.833128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.139 [2024-12-06 11:28:52.833286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.139 [2024-12-06 11:28:52.833295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.139 [2024-12-06 11:28:52.833301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.139 [2024-12-06 11:28:52.833307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.844792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.139 [2024-12-06 11:28:52.845186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.139 [2024-12-06 11:28:52.845218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.139 [2024-12-06 11:28:52.845250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.139 [2024-12-06 11:28:52.845835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.139 [2024-12-06 11:28:52.846039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.139 [2024-12-06 11:28:52.846047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.139 [2024-12-06 11:28:52.846054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.139 [2024-12-06 11:28:52.846066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.859913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.139 [2024-12-06 11:28:52.860379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.139 [2024-12-06 11:28:52.860402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.139 [2024-12-06 11:28:52.860413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.139 [2024-12-06 11:28:52.860669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.139 [2024-12-06 11:28:52.860926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.139 [2024-12-06 11:28:52.860938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.139 [2024-12-06 11:28:52.860948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.139 [2024-12-06 11:28:52.860958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.872869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.139 [2024-12-06 11:28:52.873231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.139 [2024-12-06 11:28:52.873276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.139 [2024-12-06 11:28:52.873299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.139 [2024-12-06 11:28:52.873883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.139 [2024-12-06 11:28:52.874131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.139 [2024-12-06 11:28:52.874141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.139 [2024-12-06 11:28:52.874147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.139 [2024-12-06 11:28:52.874154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.139 [2024-12-06 11:28:52.885489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.139 [2024-12-06 11:28:52.885874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.139 [2024-12-06 11:28:52.885891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.139 [2024-12-06 11:28:52.885897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.139 [2024-12-06 11:28:52.886045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.139 [2024-12-06 11:28:52.886224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.139 [2024-12-06 11:28:52.886234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.886241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.886247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.898026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.898460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.898477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.898485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.898640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.898797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.898806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.898812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.898818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.910683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.911001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.911017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.911023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.911185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.911342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.911350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.911357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.911363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.923352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.923700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.923717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.923723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.923871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.924020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.924028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.924037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.924044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.936005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.936343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.936359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.936366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.936514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.936663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.936671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.936677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.936683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.948645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.949032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.949048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.949055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.949217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.949373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.949382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.949388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.949394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.961337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.961751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.961767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.961773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.961921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.962073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.962082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.962088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.962094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.973913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.974321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.974338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.974344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.974500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.974657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.974665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.974671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.974677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.986587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.986987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.987003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.987011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:52.987171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.140 [2024-12-06 11:28:52.987337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.140 [2024-12-06 11:28:52.987345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.140 [2024-12-06 11:28:52.987350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.140 [2024-12-06 11:28:52.987356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.140 [2024-12-06 11:28:52.999226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.140 [2024-12-06 11:28:52.999649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-06 11:28:52.999695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-06 11:28:52.999718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.140 [2024-12-06 11:28:53.000319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.000733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.000742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.000748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.000754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.141 [2024-12-06 11:28:53.011825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.141 [2024-12-06 11:28:53.012234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-06 11:28:53.012251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-06 11:28:53.012261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.141 [2024-12-06 11:28:53.012410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.012559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.012568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.012573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.012579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.141 [2024-12-06 11:28:53.024449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.141 [2024-12-06 11:28:53.024853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-06 11:28:53.024869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-06 11:28:53.024875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.141 [2024-12-06 11:28:53.025022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.025197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.025206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.025212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.025219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.141 [2024-12-06 11:28:53.037066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.141 [2024-12-06 11:28:53.037479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-06 11:28:53.037495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-06 11:28:53.037502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.141 [2024-12-06 11:28:53.037649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.037798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.037806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.037812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.037817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.141 [2024-12-06 11:28:53.049685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.141 [2024-12-06 11:28:53.050111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-06 11:28:53.050155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-06 11:28:53.050179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.141 [2024-12-06 11:28:53.050722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.050876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.050886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.050892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.050898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.141 [2024-12-06 11:28:53.062297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.141 [2024-12-06 11:28:53.062707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-06 11:28:53.062763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-06 11:28:53.062787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.141 [2024-12-06 11:28:53.063385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.141 [2024-12-06 11:28:53.063572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.141 [2024-12-06 11:28:53.063581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.141 [2024-12-06 11:28:53.063587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.141 [2024-12-06 11:28:53.063593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.075079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.402 [2024-12-06 11:28:53.075447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.402 [2024-12-06 11:28:53.075465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.402 [2024-12-06 11:28:53.075473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.402 [2024-12-06 11:28:53.075633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.402 [2024-12-06 11:28:53.075795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.402 [2024-12-06 11:28:53.075805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.402 [2024-12-06 11:28:53.075811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.402 [2024-12-06 11:28:53.075817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.087733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.402 [2024-12-06 11:28:53.088164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.402 [2024-12-06 11:28:53.088212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.402 [2024-12-06 11:28:53.088236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.402 [2024-12-06 11:28:53.088767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.402 [2024-12-06 11:28:53.088917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.402 [2024-12-06 11:28:53.088925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.402 [2024-12-06 11:28:53.088935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.402 [2024-12-06 11:28:53.088941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.100289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.402 [2024-12-06 11:28:53.100658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.402 [2024-12-06 11:28:53.100674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.402 [2024-12-06 11:28:53.100681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.402 [2024-12-06 11:28:53.100829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.402 [2024-12-06 11:28:53.100977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.402 [2024-12-06 11:28:53.100986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.402 [2024-12-06 11:28:53.100992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.402 [2024-12-06 11:28:53.100998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.112860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.402 [2024-12-06 11:28:53.113254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.402 [2024-12-06 11:28:53.113300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.402 [2024-12-06 11:28:53.113323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.402 [2024-12-06 11:28:53.113830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.402 [2024-12-06 11:28:53.113980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.402 [2024-12-06 11:28:53.113988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.402 [2024-12-06 11:28:53.113994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.402 [2024-12-06 11:28:53.114000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.125404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.402 [2024-12-06 11:28:53.125835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.402 [2024-12-06 11:28:53.125880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.402 [2024-12-06 11:28:53.125903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.402 [2024-12-06 11:28:53.126500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.402 [2024-12-06 11:28:53.126952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.402 [2024-12-06 11:28:53.126960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.402 [2024-12-06 11:28:53.126966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.402 [2024-12-06 11:28:53.126972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.402 [2024-12-06 11:28:53.138044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.402 [2024-12-06 11:28:53.138465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.402 [2024-12-06 11:28:53.138501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.402 [2024-12-06 11:28:53.138527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.402 [2024-12-06 11:28:53.139080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.402 [2024-12-06 11:28:53.139252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.402 [2024-12-06 11:28:53.139261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.402 [2024-12-06 11:28:53.139267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.402 [2024-12-06 11:28:53.139272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.402 [2024-12-06 11:28:53.150649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.402 [2024-12-06 11:28:53.151061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.402 [2024-12-06 11:28:53.151078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.402 [2024-12-06 11:28:53.151086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.402 [2024-12-06 11:28:53.151234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.402 [2024-12-06 11:28:53.151383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.402 [2024-12-06 11:28:53.151391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.402 [2024-12-06 11:28:53.151397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.402 [2024-12-06 11:28:53.151403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.402 [2024-12-06 11:28:53.163335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.402 [2024-12-06 11:28:53.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.402 [2024-12-06 11:28:53.163770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.402 [2024-12-06 11:28:53.163778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.402 [2024-12-06 11:28:53.163925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.402 [2024-12-06 11:28:53.164081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.402 [2024-12-06 11:28:53.164105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.402 [2024-12-06 11:28:53.164112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.164118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.175879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.176296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.176312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.176322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.176478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.176635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.176644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.176650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.176656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.188484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.188883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.188928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.188952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.189549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.190030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.190039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.190045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.190051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.201119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.201456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.201472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.201498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.202098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.202300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.202309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.202315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.202321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.213817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.214230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.214247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.214254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.214401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.214550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.214562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.214568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.214573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.226446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.226782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.226798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.226805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.226952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.227128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.227137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.227143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.227150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.239077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.239416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.239433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.239440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.239597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.239753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.239762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.239767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.239774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.251785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.252175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.252221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.252245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.252831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.253062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.253071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.253078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.253086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.264350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.264775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.264820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.264843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.265291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.265449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.403 [2024-12-06 11:28:53.265458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.403 [2024-12-06 11:28:53.265464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.403 [2024-12-06 11:28:53.265470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.403 [2024-12-06 11:28:53.276929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.403 [2024-12-06 11:28:53.277281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.403 [2024-12-06 11:28:53.277325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.403 [2024-12-06 11:28:53.277349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.403 [2024-12-06 11:28:53.277943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.403 [2024-12-06 11:28:53.278114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.404 [2024-12-06 11:28:53.278123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.404 [2024-12-06 11:28:53.278130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.404 [2024-12-06 11:28:53.278136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.404 [2024-12-06 11:28:53.289573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.404 [2024-12-06 11:28:53.289991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.404 [2024-12-06 11:28:53.290035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.404 [2024-12-06 11:28:53.290284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.404 [2024-12-06 11:28:53.290877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.404 [2024-12-06 11:28:53.291377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.404 [2024-12-06 11:28:53.291386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.404 [2024-12-06 11:28:53.291392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.404 [2024-12-06 11:28:53.291398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.404 [2024-12-06 11:28:53.302177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.404 [2024-12-06 11:28:53.302529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.404 [2024-12-06 11:28:53.302545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.404 [2024-12-06 11:28:53.302552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.404 [2024-12-06 11:28:53.302700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.404 [2024-12-06 11:28:53.302848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.404 [2024-12-06 11:28:53.302857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.404 [2024-12-06 11:28:53.302863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.404 [2024-12-06 11:28:53.302869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.404 [2024-12-06 11:28:53.314738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.404 [2024-12-06 11:28:53.315148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.404 [2024-12-06 11:28:53.315165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.404 [2024-12-06 11:28:53.315172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.404 [2024-12-06 11:28:53.315318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.404 [2024-12-06 11:28:53.315468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.404 [2024-12-06 11:28:53.315476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.404 [2024-12-06 11:28:53.315482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.404 [2024-12-06 11:28:53.315487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.404 [2024-12-06 11:28:53.327370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.404 [2024-12-06 11:28:53.327786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.404 [2024-12-06 11:28:53.327803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.404 [2024-12-06 11:28:53.327809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.404 [2024-12-06 11:28:53.327964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.404 [2024-12-06 11:28:53.328126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.404 [2024-12-06 11:28:53.328135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.404 [2024-12-06 11:28:53.328142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.404 [2024-12-06 11:28:53.328148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.340054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.340474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.340519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.340542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.341158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.341317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.341326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.341332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.341338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.352650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.353079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.353124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.353148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.353733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.354334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.354361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.354392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.354398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.365267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.365687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.365730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.365754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.366352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.366591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.366600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.366606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.366613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.377861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.378252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.378268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.378274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.378423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.378571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.378583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.378589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.378595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.390411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.390827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.390843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.390850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.390998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.391168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.391178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.391184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.391190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.403045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.403368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.403384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.403390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.403538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.403686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.403694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.403700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.403706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.415612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.416036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.416093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.665 [2024-12-06 11:28:53.416119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.665 [2024-12-06 11:28:53.416603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.665 [2024-12-06 11:28:53.416752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.665 [2024-12-06 11:28:53.416761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.665 [2024-12-06 11:28:53.416767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.665 [2024-12-06 11:28:53.416776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.665 [2024-12-06 11:28:53.428223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.665 [2024-12-06 11:28:53.428629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.665 [2024-12-06 11:28:53.428677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.666 [2024-12-06 11:28:53.428700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.666 [2024-12-06 11:28:53.429295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.666 [2024-12-06 11:28:53.429523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.666 [2024-12-06 11:28:53.429532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.666 [2024-12-06 11:28:53.429538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.666 [2024-12-06 11:28:53.429543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.666 [2024-12-06 11:28:53.440881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.666 [2024-12-06 11:28:53.441296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.666 [2024-12-06 11:28:53.441312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.666 [2024-12-06 11:28:53.441319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.666 [2024-12-06 11:28:53.441476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.666 [2024-12-06 11:28:53.441631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.666 [2024-12-06 11:28:53.441640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.666 [2024-12-06 11:28:53.441647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.666 [2024-12-06 11:28:53.441653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.666 [2024-12-06 11:28:53.453425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.666 [2024-12-06 11:28:53.453858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.666 [2024-12-06 11:28:53.453874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.666 [2024-12-06 11:28:53.453880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.666 [2024-12-06 11:28:53.454030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.666 [2024-12-06 11:28:53.454204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.666 [2024-12-06 11:28:53.454213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.666 [2024-12-06 11:28:53.454220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.666 [2024-12-06 11:28:53.454226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.666 [2024-12-06 11:28:53.466212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.666 [2024-12-06 11:28:53.466608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.666 [2024-12-06 11:28:53.466623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.666 [2024-12-06 11:28:53.466630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.666 [2024-12-06 11:28:53.466785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.666 [2024-12-06 11:28:53.466941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.666 [2024-12-06 11:28:53.466949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.666 [2024-12-06 11:28:53.466955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.666 [2024-12-06 11:28:53.466962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.666 [2024-12-06 11:28:53.478930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.666 [2024-12-06 11:28:53.479360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.666 [2024-12-06 11:28:53.479376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:20.666 [2024-12-06 11:28:53.479383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:20.666 [2024-12-06 11:28:53.479530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:20.666 [2024-12-06 11:28:53.479679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.666 [2024-12-06 11:28:53.479688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.666 [2024-12-06 11:28:53.479693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.666 [2024-12-06 11:28:53.479699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:20.666 [2024-12-06 11:28:53.491520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.666 [2024-12-06 11:28:53.491923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.666 [2024-12-06 11:28:53.491939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.666 [2024-12-06 11:28:53.491946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.666 [2024-12-06 11:28:53.492099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.666 [2024-12-06 11:28:53.492271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.666 [2024-12-06 11:28:53.492280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.666 [2024-12-06 11:28:53.492286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.666 [2024-12-06 11:28:53.492292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.666 [2024-12-06 11:28:53.504083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.666 [2024-12-06 11:28:53.504426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.666 [2024-12-06 11:28:53.504442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.666 [2024-12-06 11:28:53.504448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.666 [2024-12-06 11:28:53.504599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.666 [2024-12-06 11:28:53.504748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.666 [2024-12-06 11:28:53.504757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.666 [2024-12-06 11:28:53.504763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.666 [2024-12-06 11:28:53.504769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.666 [2024-12-06 11:28:53.516783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.666 [2024-12-06 11:28:53.517177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.666 [2024-12-06 11:28:53.517195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.666 [2024-12-06 11:28:53.517203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.666 [2024-12-06 11:28:53.517368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.666 [2024-12-06 11:28:53.517517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.666 [2024-12-06 11:28:53.517525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.666 [2024-12-06 11:28:53.517531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.666 [2024-12-06 11:28:53.517537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.666 [2024-12-06 11:28:53.529520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.666 [2024-12-06 11:28:53.529927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.666 [2024-12-06 11:28:53.529944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.666 [2024-12-06 11:28:53.529952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.666 [2024-12-06 11:28:53.530112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.666 [2024-12-06 11:28:53.530269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.666 [2024-12-06 11:28:53.530278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.666 [2024-12-06 11:28:53.530284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.666 [2024-12-06 11:28:53.530290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.666 [2024-12-06 11:28:53.542322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.667 [2024-12-06 11:28:53.542745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.667 [2024-12-06 11:28:53.542762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.667 [2024-12-06 11:28:53.542770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.667 [2024-12-06 11:28:53.542930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.667 [2024-12-06 11:28:53.543095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.667 [2024-12-06 11:28:53.543108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.667 [2024-12-06 11:28:53.543115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.667 [2024-12-06 11:28:53.543122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.667 [2024-12-06 11:28:53.555133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.667 [2024-12-06 11:28:53.555532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.667 [2024-12-06 11:28:53.555548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.667 [2024-12-06 11:28:53.555555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.667 [2024-12-06 11:28:53.555714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.667 [2024-12-06 11:28:53.555875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.667 [2024-12-06 11:28:53.555884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.667 [2024-12-06 11:28:53.555890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.667 [2024-12-06 11:28:53.555897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.667 [2024-12-06 11:28:53.567856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.667 [2024-12-06 11:28:53.568203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.667 [2024-12-06 11:28:53.568220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.667 [2024-12-06 11:28:53.568227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.667 [2024-12-06 11:28:53.568382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.667 [2024-12-06 11:28:53.568538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.667 [2024-12-06 11:28:53.568547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.667 [2024-12-06 11:28:53.568554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.667 [2024-12-06 11:28:53.568560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.667 [2024-12-06 11:28:53.580702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.667 [2024-12-06 11:28:53.581027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.667 [2024-12-06 11:28:53.581043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.667 [2024-12-06 11:28:53.581050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.667 [2024-12-06 11:28:53.581214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.667 [2024-12-06 11:28:53.581376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.667 [2024-12-06 11:28:53.581384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.667 [2024-12-06 11:28:53.581391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.667 [2024-12-06 11:28:53.581400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.667 6319.20 IOPS, 24.68 MiB/s [2024-12-06T10:28:53.605Z] [2024-12-06 11:28:53.593339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.667 [2024-12-06 11:28:53.593680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.667 [2024-12-06 11:28:53.593697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.667 [2024-12-06 11:28:53.593704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.667 [2024-12-06 11:28:53.593860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.667 [2024-12-06 11:28:53.594016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.667 [2024-12-06 11:28:53.594025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.667 [2024-12-06 11:28:53.594031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.667 [2024-12-06 11:28:53.594037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.927 [2024-12-06 11:28:53.606108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.927 [2024-12-06 11:28:53.606511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.927 [2024-12-06 11:28:53.606528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.606535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.606695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.606856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.606865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.606871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.606877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.618845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.619124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.619141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.619148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.619303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.619460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.619468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.619474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.619480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.631542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.631890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.631941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.631965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.632565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.633166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.633199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.633205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.633211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.644280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.644540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.644556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.644563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.644718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.644873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.644882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.644888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.644894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.657002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.657320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.657336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.657343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.657490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.657639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.657648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.657654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.657659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.669633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.669952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.669968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.669975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.670152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.670308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.670317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.670323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.670329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.682292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.682660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.682705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.682729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.683230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.683387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.683396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.683402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.683408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.694999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.695340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.695357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.695364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.695513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.695661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.928 [2024-12-06 11:28:53.695670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.928 [2024-12-06 11:28:53.695675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.928 [2024-12-06 11:28:53.695681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.928 [2024-12-06 11:28:53.707649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.928 [2024-12-06 11:28:53.708014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.928 [2024-12-06 11:28:53.708030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.928 [2024-12-06 11:28:53.708038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.928 [2024-12-06 11:28:53.708199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.928 [2024-12-06 11:28:53.708356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.929 [2024-12-06 11:28:53.708367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.929 [2024-12-06 11:28:53.708373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.929 [2024-12-06 11:28:53.708379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.929 [2024-12-06 11:28:53.720455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.929 [2024-12-06 11:28:53.720786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-12-06 11:28:53.720802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.929 [2024-12-06 11:28:53.720809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.929 [2024-12-06 11:28:53.720956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.929 [2024-12-06 11:28:53.721126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.929 [2024-12-06 11:28:53.721136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.929 [2024-12-06 11:28:53.721142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.929 [2024-12-06 11:28:53.721148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.929 [2024-12-06 11:28:53.733103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.929 [2024-12-06 11:28:53.733432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-12-06 11:28:53.733448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.929 [2024-12-06 11:28:53.733455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.929 [2024-12-06 11:28:53.733610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.929 [2024-12-06 11:28:53.733767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.929 [2024-12-06 11:28:53.733776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.929 [2024-12-06 11:28:53.733782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.929 [2024-12-06 11:28:53.733788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.929 [2024-12-06 11:28:53.745721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.929 [2024-12-06 11:28:53.746048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-12-06 11:28:53.746105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:20.929 [2024-12-06 11:28:53.746130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:20.929 [2024-12-06 11:28:53.746613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:20.929 [2024-12-06 11:28:53.746763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.929 [2024-12-06 11:28:53.746771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.929 [2024-12-06 11:28:53.746777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.929 [2024-12-06 11:28:53.746783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.193 [2024-12-06 11:28:54.106288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.193 [2024-12-06 11:28:54.106713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.193 [2024-12-06 11:28:54.106756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.193 [2024-12-06 11:28:54.106780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.193 [2024-12-06 11:28:54.107292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.193 [2024-12-06 11:28:54.107443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.193 [2024-12-06 11:28:54.107451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.193 [2024-12-06 11:28:54.107457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.193 [2024-12-06 11:28:54.107462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.193 [2024-12-06 11:28:54.118915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.193 [2024-12-06 11:28:54.119338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.193 [2024-12-06 11:28:54.119384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.193 [2024-12-06 11:28:54.119407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.193 [2024-12-06 11:28:54.119991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.193 [2024-12-06 11:28:54.120188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.193 [2024-12-06 11:28:54.120198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.193 [2024-12-06 11:28:54.120204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.193 [2024-12-06 11:28:54.120210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 [2024-12-06 11:28:54.131665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.132014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.454 [2024-12-06 11:28:54.132031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.454 [2024-12-06 11:28:54.132041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.454 [2024-12-06 11:28:54.132209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.454 [2024-12-06 11:28:54.132370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.454 [2024-12-06 11:28:54.132379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.454 [2024-12-06 11:28:54.132385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.454 [2024-12-06 11:28:54.132392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 [2024-12-06 11:28:54.144229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.144614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.454 [2024-12-06 11:28:54.144629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.454 [2024-12-06 11:28:54.144637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.454 [2024-12-06 11:28:54.144785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.454 [2024-12-06 11:28:54.144932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.454 [2024-12-06 11:28:54.144941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.454 [2024-12-06 11:28:54.144947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.454 [2024-12-06 11:28:54.144952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1881330 Killed "${NVMF_APP[@]}" "$@"
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:21.454 [2024-12-06 11:28:54.157097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.157513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.454 [2024-12-06 11:28:54.157530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.454 [2024-12-06 11:28:54.157537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.454 [2024-12-06 11:28:54.157696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.454 [2024-12-06 11:28:54.157857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.454 [2024-12-06 11:28:54.157865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.454 [2024-12-06 11:28:54.157872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.454 [2024-12-06 11:28:54.157878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1882863
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1882863
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1882863 ']'
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:21.454 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:21.454 [2024-12-06 11:28:54.169897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.170353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.454 [2024-12-06 11:28:54.170368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.454 [2024-12-06 11:28:54.170375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.454 [2024-12-06 11:28:54.170534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.454 [2024-12-06 11:28:54.170694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.454 [2024-12-06 11:28:54.170702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.454 [2024-12-06 11:28:54.170709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.454 [2024-12-06 11:28:54.170715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 [2024-12-06 11:28:54.182739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.183178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.454 [2024-12-06 11:28:54.183194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.454 [2024-12-06 11:28:54.183201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.454 [2024-12-06 11:28:54.183360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.454 [2024-12-06 11:28:54.183520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.454 [2024-12-06 11:28:54.183528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.454 [2024-12-06 11:28:54.183534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.454 [2024-12-06 11:28:54.183540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.454 [2024-12-06 11:28:54.195414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.454 [2024-12-06 11:28:54.195817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.195833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.195839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.195997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.196159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.196167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.196173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.196178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.206630] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization...
00:27:21.455 [2024-12-06 11:28:54.206666] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:21.455 [2024-12-06 11:28:54.208180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.208515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.208532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.208539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.208699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.208860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.208868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.208874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.208880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.221069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.221493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.221510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.221516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.221673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.221829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.221837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.221843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.221849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.233848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.234237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.234259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.234266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.234429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.234589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.234596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.234602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.234607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.246713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.247026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.247042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.247049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.247212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.247373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.247381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.247388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.247394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.259448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.259869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.259885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.259892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.260047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.260208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.260216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.260222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.260228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.272130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.272555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.272571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.272577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.272733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.272889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.272902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.272908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.272914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.281214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:21.455 [2024-12-06 11:28:54.284829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.455 [2024-12-06 11:28:54.285143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.455 [2024-12-06 11:28:54.285160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.455 [2024-12-06 11:28:54.285167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.455 [2024-12-06 11:28:54.285324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.455 [2024-12-06 11:28:54.285482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.455 [2024-12-06 11:28:54.285490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.455 [2024-12-06 11:28:54.285496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.455 [2024-12-06 11:28:54.285502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.455 [2024-12-06 11:28:54.297514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.455 [2024-12-06 11:28:54.297847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.455 [2024-12-06 11:28:54.297863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.455 [2024-12-06 11:28:54.297870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.455 [2024-12-06 11:28:54.298027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.456 [2024-12-06 11:28:54.298209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.456 [2024-12-06 11:28:54.298218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.456 [2024-12-06 11:28:54.298224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.456 [2024-12-06 11:28:54.298230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.456 [2024-12-06 11:28:54.310249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.456 [2024-12-06 11:28:54.310646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.456 [2024-12-06 11:28:54.310662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.456 [2024-12-06 11:28:54.310670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.456 [2024-12-06 11:28:54.310826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.456 [2024-12-06 11:28:54.310983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.456 [2024-12-06 11:28:54.310991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.456 [2024-12-06 11:28:54.311001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.456 [2024-12-06 11:28:54.311007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.456 [2024-12-06 11:28:54.318986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.456 [2024-12-06 11:28:54.319009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.456 [2024-12-06 11:28:54.319016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.456 [2024-12-06 11:28:54.319021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:21.456 [2024-12-06 11:28:54.319026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.456 [2024-12-06 11:28:54.320402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.456 [2024-12-06 11:28:54.320439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.456 [2024-12-06 11:28:54.320440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.456 [2024-12-06 11:28:54.323115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.456 [2024-12-06 11:28:54.323556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.456 [2024-12-06 11:28:54.323575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.456 [2024-12-06 11:28:54.323583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.456 [2024-12-06 11:28:54.323744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.456 [2024-12-06 11:28:54.323905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.456 [2024-12-06 11:28:54.323913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.456 [2024-12-06 11:28:54.323919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.456 [2024-12-06 11:28:54.323925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.456 [2024-12-06 11:28:54.335941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.456 [2024-12-06 11:28:54.336315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.456 [2024-12-06 11:28:54.336332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.456 [2024-12-06 11:28:54.336341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.456 [2024-12-06 11:28:54.336501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.456 [2024-12-06 11:28:54.336662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.456 [2024-12-06 11:28:54.336670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.456 [2024-12-06 11:28:54.336676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.456 [2024-12-06 11:28:54.336682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.456 [2024-12-06 11:28:54.348705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.456 [2024-12-06 11:28:54.349079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.456 [2024-12-06 11:28:54.349098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.456 [2024-12-06 11:28:54.349105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.456 [2024-12-06 11:28:54.349272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.456 [2024-12-06 11:28:54.349432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.456 [2024-12-06 11:28:54.349440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.456 [2024-12-06 11:28:54.349446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.456 [2024-12-06 11:28:54.349452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.456 [2024-12-06 11:28:54.361454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.456 [2024-12-06 11:28:54.361871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.456 [2024-12-06 11:28:54.361889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.456 [2024-12-06 11:28:54.361896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.456 [2024-12-06 11:28:54.362056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.456 [2024-12-06 11:28:54.362222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.456 [2024-12-06 11:28:54.362230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.456 [2024-12-06 11:28:54.362236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.456 [2024-12-06 11:28:54.362242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.456 [2024-12-06 11:28:54.374240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.456 [2024-12-06 11:28:54.374658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.456 [2024-12-06 11:28:54.374675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.456 [2024-12-06 11:28:54.374682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.456 [2024-12-06 11:28:54.374843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.456 [2024-12-06 11:28:54.375014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.456 [2024-12-06 11:28:54.375022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.456 [2024-12-06 11:28:54.375028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.456 [2024-12-06 11:28:54.375034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.456 [2024-12-06 11:28:54.387031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.456 [2024-12-06 11:28:54.387436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.456 [2024-12-06 11:28:54.387452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.456 [2024-12-06 11:28:54.387460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.456 [2024-12-06 11:28:54.387620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.456 [2024-12-06 11:28:54.387779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.456 [2024-12-06 11:28:54.387792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.456 [2024-12-06 11:28:54.387798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.456 [2024-12-06 11:28:54.387804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.399814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.717 [2024-12-06 11:28:54.400213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.717 [2024-12-06 11:28:54.400229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.717 [2024-12-06 11:28:54.400236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.717 [2024-12-06 11:28:54.400396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.717 [2024-12-06 11:28:54.400555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.717 [2024-12-06 11:28:54.400562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.717 [2024-12-06 11:28:54.400568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.717 [2024-12-06 11:28:54.400574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.412568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.717 [2024-12-06 11:28:54.412968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.717 [2024-12-06 11:28:54.412983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.717 [2024-12-06 11:28:54.412990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.717 [2024-12-06 11:28:54.413154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.717 [2024-12-06 11:28:54.413314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.717 [2024-12-06 11:28:54.413322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.717 [2024-12-06 11:28:54.413328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.717 [2024-12-06 11:28:54.413333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.425318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.717 [2024-12-06 11:28:54.425720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.717 [2024-12-06 11:28:54.425735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.717 [2024-12-06 11:28:54.425742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.717 [2024-12-06 11:28:54.425901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.717 [2024-12-06 11:28:54.426065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.717 [2024-12-06 11:28:54.426073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.717 [2024-12-06 11:28:54.426079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.717 [2024-12-06 11:28:54.426088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.438065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.717 [2024-12-06 11:28:54.438465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.717 [2024-12-06 11:28:54.438480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.717 [2024-12-06 11:28:54.438487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.717 [2024-12-06 11:28:54.438646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.717 [2024-12-06 11:28:54.438806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.717 [2024-12-06 11:28:54.438814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.717 [2024-12-06 11:28:54.438819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.717 [2024-12-06 11:28:54.438825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.450815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.717 [2024-12-06 11:28:54.451226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.717 [2024-12-06 11:28:54.451242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.717 [2024-12-06 11:28:54.451249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.717 [2024-12-06 11:28:54.451408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.717 [2024-12-06 11:28:54.451568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.717 [2024-12-06 11:28:54.451575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.717 [2024-12-06 11:28:54.451581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.717 [2024-12-06 11:28:54.451587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.717 [2024-12-06 11:28:54.463567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.463940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.463955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.463962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.464125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.464285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.464292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.464298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.464303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.476461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.476887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.476903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.476909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.477072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.477232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.477240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.477246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.477251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.489239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.489666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.489681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.489688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.489847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.490007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.490015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.490020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.490026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.502024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.502447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.502462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.502469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.502628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.502788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.502796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.502802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.502807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.514743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.515141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.515157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.515164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.515326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.515485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.515492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.515498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.515503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.527496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.527899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.527914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.527921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.528085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.528245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.528253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.528259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.528265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.540252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.540675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.540691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.540699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.540859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.541019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.541027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.541033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.541039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.553036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.553467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.553483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.553490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.553650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.553810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.553820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.718 [2024-12-06 11:28:54.553826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.718 [2024-12-06 11:28:54.553832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.718 [2024-12-06 11:28:54.565830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.718 [2024-12-06 11:28:54.566169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.718 [2024-12-06 11:28:54.566185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.718 [2024-12-06 11:28:54.566193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.718 [2024-12-06 11:28:54.566354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.718 [2024-12-06 11:28:54.566513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.718 [2024-12-06 11:28:54.566520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.566526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.566532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 [2024-12-06 11:28:54.578703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.579110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.579126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.579133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.579293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.579453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.579461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.579467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.579473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 5266.00 IOPS, 20.57 MiB/s [2024-12-06T10:28:54.657Z]
00:27:21.719 [2024-12-06 11:28:54.591582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.591983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.591999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.592006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.592169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.592329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.592336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.592342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.592352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 [2024-12-06 11:28:54.604351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.604752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.604767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.604774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.604933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.605097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.605105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.605111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.605117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 [2024-12-06 11:28:54.617102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.617494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.617509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.617515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.617674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.617834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.617842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.617848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.617853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 [2024-12-06 11:28:54.629834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.630237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.630252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.630259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.630419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.630579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.630587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.630593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.630599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.719 [2024-12-06 11:28:54.642593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.719 [2024-12-06 11:28:54.642910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.719 [2024-12-06 11:28:54.642925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420
00:27:21.719 [2024-12-06 11:28:54.642932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set
00:27:21.719 [2024-12-06 11:28:54.643095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor
00:27:21.719 [2024-12-06 11:28:54.643254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.719 [2024-12-06 11:28:54.643261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.719 [2024-12-06 11:28:54.643267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.719 [2024-12-06 11:28:54.643273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.980 [2024-12-06 11:28:54.655413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.655764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.655779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.655785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.655944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.656108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.656116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.656122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.656128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.668268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.668666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.668682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.668689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.668848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.669008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.669016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.669021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.669027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.681028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.681436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.681452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.681459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.681621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.681782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.681790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.681795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.681802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.693798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.694174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.694190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.694197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.694357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.694516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.694524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.694529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.694535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.706536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.706937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.706952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.706959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.707122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.707283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.707290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.707296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.707302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.719295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.719688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.719704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.719711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.980 [2024-12-06 11:28:54.719870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.980 [2024-12-06 11:28:54.720030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.980 [2024-12-06 11:28:54.720040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.980 [2024-12-06 11:28:54.720046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.980 [2024-12-06 11:28:54.720052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.980 [2024-12-06 11:28:54.732034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.980 [2024-12-06 11:28:54.732458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.980 [2024-12-06 11:28:54.732474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.980 [2024-12-06 11:28:54.732480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.732640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.732799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.732807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.732813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.732819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.744806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.745226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.745242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.745248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.745408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.745568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.745575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.745581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.745586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.757571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.757967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.757983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.757989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.758153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.758314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.758321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.758327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.758336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.770327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.770726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.770742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.770749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.770909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.771073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.771081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.771087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.771093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.783089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.783489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.783505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.783511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.783670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.783830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.783837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.783843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.783849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.795839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.796240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.796256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.796263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.796422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.796582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.796589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.796596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.796602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.808589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.808985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.809002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.809009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.809172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.809336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.809343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.809349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.809355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.821340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.821692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.821707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.821714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.821873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.822033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.822040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.822046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.822052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.834192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.834588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.834603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.981 [2024-12-06 11:28:54.834610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.981 [2024-12-06 11:28:54.834769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.981 [2024-12-06 11:28:54.834929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.981 [2024-12-06 11:28:54.834937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.981 [2024-12-06 11:28:54.834943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.981 [2024-12-06 11:28:54.834948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.981 [2024-12-06 11:28:54.846941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.981 [2024-12-06 11:28:54.847373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.981 [2024-12-06 11:28:54.847388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.847396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.847557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.847716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.847724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.847730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.847735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.982 [2024-12-06 11:28:54.859738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.982 [2024-12-06 11:28:54.860162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.982 [2024-12-06 11:28:54.860178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.860184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.860344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.860503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.860511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.860517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.860522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.982 [2024-12-06 11:28:54.872515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.982 [2024-12-06 11:28:54.872942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.982 [2024-12-06 11:28:54.872957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.872963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.873126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.873285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.873293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.873298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.873304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.982 [2024-12-06 11:28:54.885307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.982 [2024-12-06 11:28:54.885737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.982 [2024-12-06 11:28:54.885753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.885759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.885918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.886083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.886096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.886102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.886108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.982 [2024-12-06 11:28:54.898126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.982 [2024-12-06 11:28:54.898531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.982 [2024-12-06 11:28:54.898547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.898553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.898712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.898872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.898879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.898885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.898891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.982 [2024-12-06 11:28:54.910885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.982 [2024-12-06 11:28:54.911198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.982 [2024-12-06 11:28:54.911214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:21.982 [2024-12-06 11:28:54.911221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:21.982 [2024-12-06 11:28:54.911380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:21.982 [2024-12-06 11:28:54.911540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.982 [2024-12-06 11:28:54.911548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.982 [2024-12-06 11:28:54.911553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.982 [2024-12-06 11:28:54.911559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.923705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.924127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.924142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.924149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.924308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.924468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.924476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.924482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.924487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.936487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.936913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.936929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.936937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.937101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.937262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.937270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.937277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.937283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.949270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.949603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.949619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.949625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.949785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.949945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.949953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.949959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.949964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.962117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.962535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.962550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.962557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.962717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.962877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.962885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.962892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.962899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.974898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.975298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.975316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.975323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.975482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.975642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.975650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.975656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.975662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:54.987662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:54.988106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:54.988121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:54.988128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:54.988286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:54.988447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.243 [2024-12-06 11:28:54.988454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.243 [2024-12-06 11:28:54.988460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.243 [2024-12-06 11:28:54.988466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.243 [2024-12-06 11:28:55.000468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.243 [2024-12-06 11:28:55.000879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.243 [2024-12-06 11:28:55.000895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.243 [2024-12-06 11:28:55.000902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.243 [2024-12-06 11:28:55.001066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.243 [2024-12-06 11:28:55.001227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.001234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.001240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.001246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.244 [2024-12-06 11:28:55.013247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.013655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.013673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.013681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.013840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.014000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.014008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.014015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.014021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 [2024-12-06 11:28:55.026062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.026422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.026438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.026445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.026603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.026764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.026773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.026779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.026784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 [2024-12-06 11:28:55.038797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.039191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.039208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.039215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.039378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.039537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.039545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.039551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.039557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.244 [2024-12-06 11:28:55.049906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.244 [2024-12-06 11:28:55.051557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.052004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.052019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.052027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.052191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.052352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.052360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.052366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.052371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.244 [2024-12-06 11:28:55.064377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.064652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.064667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.064674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.064833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.064993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.065000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.065007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.065013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 [2024-12-06 11:28:55.077218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 [2024-12-06 11:28:55.077544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.077560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.077567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.077728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 [2024-12-06 11:28:55.077889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.077897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.077904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.077913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 Malloc0 00:27:22.244 [2024-12-06 11:28:55.089971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.244 [2024-12-06 11:28:55.090328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.244 [2024-12-06 11:28:55.090345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.244 [2024-12-06 11:28:55.090352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.244 [2024-12-06 11:28:55.090512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.244 [2024-12-06 11:28:55.090673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.244 [2024-12-06 11:28:55.090681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.244 [2024-12-06 11:28:55.090688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.244 [2024-12-06 11:28:55.090694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.244 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.245 [2024-12-06 11:28:55.102722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.245 [2024-12-06 11:28:55.102969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.245 [2024-12-06 11:28:55.102987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91b630 with addr=10.0.0.2, port=4420 00:27:22.245 [2024-12-06 11:28:55.102994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91b630 is same with the state(6) to be set 00:27:22.245 [2024-12-06 11:28:55.103158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91b630 (9): Bad file descriptor 00:27:22.245 [2024-12-06 11:28:55.103317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.245 [2024-12-06 11:28:55.103324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.245 [2024-12-06 11:28:55.103330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:22.245 [2024-12-06 11:28:55.103336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.245 [2024-12-06 11:28:55.113226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.245 [2024-12-06 11:28:55.115502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.245 11:28:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1881806 00:27:22.245 [2024-12-06 11:28:55.139942] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:23.749 5309.00 IOPS, 20.74 MiB/s [2024-12-06T10:28:57.624Z] 6182.50 IOPS, 24.15 MiB/s [2024-12-06T10:28:59.003Z] 6873.56 IOPS, 26.85 MiB/s [2024-12-06T10:28:59.939Z] 7440.40 IOPS, 29.06 MiB/s [2024-12-06T10:29:00.877Z] 7888.73 IOPS, 30.82 MiB/s [2024-12-06T10:29:01.814Z] 8276.33 IOPS, 32.33 MiB/s [2024-12-06T10:29:02.752Z] 8606.62 IOPS, 33.62 MiB/s [2024-12-06T10:29:03.688Z] 8868.79 IOPS, 34.64 MiB/s [2024-12-06T10:29:03.688Z] 9112.80 IOPS, 35.60 MiB/s 00:27:30.750 Latency(us) 00:27:30.750 [2024-12-06T10:29:03.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:30.750 Verification LBA range: start 0x0 length 0x4000 00:27:30.750 Nvme1n1 : 15.01 9115.92 35.61 13786.32 0.00 5570.74 413.32 15728.64 00:27:30.750 [2024-12-06T10:29:03.688Z] =================================================================================================================== 00:27:30.750 [2024-12-06T10:29:03.688Z] Total : 9115.92 35.61 13786.32 0.00 5570.74 413.32 15728.64 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.009 rmmod nvme_tcp 00:27:31.009 rmmod nvme_fabrics 00:27:31.009 rmmod nvme_keyring 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1882863 ']' 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1882863 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1882863 ']' 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1882863 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882863 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882863' 00:27:31.009 killing process with pid 1882863 00:27:31.009 
11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1882863 00:27:31.009 11:29:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1882863 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.269 11:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.802 00:27:33.802 real 0m26.248s 00:27:33.802 user 1m1.272s 00:27:33.802 sys 0m6.875s 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.802 ************************************ 00:27:33.802 END TEST nvmf_bdevperf 00:27:33.802 
************************************ 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.802 11:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.802 ************************************ 00:27:33.802 START TEST nvmf_target_disconnect 00:27:33.803 ************************************ 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:33.803 * Looking for test storage... 00:27:33.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:33.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.803 --rc genhtml_branch_coverage=1 00:27:33.803 --rc genhtml_function_coverage=1 00:27:33.803 --rc genhtml_legend=1 00:27:33.803 --rc geninfo_all_blocks=1 00:27:33.803 --rc geninfo_unexecuted_blocks=1 
00:27:33.803 00:27:33.803 ' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:33.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.803 --rc genhtml_branch_coverage=1 00:27:33.803 --rc genhtml_function_coverage=1 00:27:33.803 --rc genhtml_legend=1 00:27:33.803 --rc geninfo_all_blocks=1 00:27:33.803 --rc geninfo_unexecuted_blocks=1 00:27:33.803 00:27:33.803 ' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:33.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.803 --rc genhtml_branch_coverage=1 00:27:33.803 --rc genhtml_function_coverage=1 00:27:33.803 --rc genhtml_legend=1 00:27:33.803 --rc geninfo_all_blocks=1 00:27:33.803 --rc geninfo_unexecuted_blocks=1 00:27:33.803 00:27:33.803 ' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:33.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.803 --rc genhtml_branch_coverage=1 00:27:33.803 --rc genhtml_function_coverage=1 00:27:33.803 --rc genhtml_legend=1 00:27:33.803 --rc geninfo_all_blocks=1 00:27:33.803 --rc geninfo_unexecuted_blocks=1 00:27:33.803 00:27:33.803 ' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.803 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.804 11:29:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:33.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
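The `[: : integer expression expected` error logged above comes from `'[' '' -eq 1 ']'`: an unset variable expands to the empty string, which `test`'s numeric `-eq` operator cannot parse. A small sketch of the pitfall and the usual guard; `flag` is a stand-in name for the unset variable, not the one in common.sh:

```shell
flag=""   # stands in for an unset/empty SPDK_TEST_* style variable
# Without a default, [ "" -eq 1 ] would print the error seen in the log.
# ${flag:-0} substitutes 0 for both unset and empty, keeping it numeric.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

With the empty `flag` above, the guarded comparison takes the else branch instead of erroring.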
/dev/null' 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.804 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.374 
11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
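The `e810+=`, `x722+=`, and `mlx+=` appends above bucket NICs by PCI vendor:device id out of `pci_bus_cache`. The same classification can be sketched as a bash associative array (the ids are copied from the trace; the `classify` helper is hypothetical):

```shell
# Map vendor:device ids to NIC families, mirroring the arrays in the log.
declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810
    [0x8086:0x37d2]=x722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx [0x15b3:0x101d]=mlx
)

classify() { echo "${nic_family[$1]:-unknown}"; }

classify 0x8086:0x159b   # the device found later in this log
```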
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.374 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
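The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob above is how the harness maps a PCI function to its kernel interface name, producing the "Found net devices under ..." lines. A standalone sketch of that lookup; the PCI address is taken from this log and will not exist on other machines, in which case the loop simply prints nothing:

```shell
pci=0000:af:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$path" ] || continue       # glob did not match: nothing to report
    echo "Found net devices under $pci: ${path##*/}"
done
```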
00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.374 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.374 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.375 11:29:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:27:40.375 00:27:40.375 --- 10.0.0.2 ping statistics --- 00:27:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.375 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:27:40.375 00:27:40.375 --- 10.0.0.1 ping statistics --- 00:27:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.375 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.375 11:29:12 
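The `ipts` call above is a thin wrapper that re-issues the same iptables arguments with an `-m comment` tag, so teardown can later find and delete exactly the rules this test inserted. A dry-run sketch of the idiom; `run` echoes instead of invoking iptables, so no root is needed, and the wrapper body is an assumption based on the expansion visible in the log:

```shell
run() { echo "$@"; }   # stand-in for the real iptables invocation

# Tag every rule with its own argument string for later cleanup.
ipts() { run iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```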
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:40.375 ************************************ 00:27:40.375 START TEST nvmf_target_disconnect_tc1 00:27:40.375 ************************************ 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:40.375 [2024-12-06 11:29:12.515405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.375 [2024-12-06 11:29:12.515444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1567470 with 
addr=10.0.0.2, port=4420 00:27:40.375 [2024-12-06 11:29:12.515469] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:40.375 [2024-12-06 11:29:12.515478] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:40.375 [2024-12-06 11:29:12.515483] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:40.375 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:40.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:40.375 Initializing NVMe Controllers 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:40.375 00:27:40.375 real 0m0.115s 00:27:40.375 user 0m0.047s 00:27:40.375 sys 0m0.068s 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.375 ************************************ 00:27:40.375 END TEST nvmf_target_disconnect_tc1 00:27:40.375 ************************************ 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.375 11:29:12 
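tc1 above passes precisely because the reconnect example fails: no target is listening on 10.0.0.2:4420 yet, so `connect()` returns errno 111 and the harness's `NOT` wrapper turns the nonzero exit (`es=1`) into success. A minimal sketch of that inversion; the body is an assumption, as the real `NOT` in autotest_common.sh also inspects the exit status ranges traced above:

```shell
# Expect failure: succeed only when the wrapped command exits nonzero.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

if NOT false; then echo "expected failure observed"; fi
```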
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:40.375 ************************************ 00:27:40.375 START TEST nvmf_target_disconnect_tc2 00:27:40.375 ************************************ 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1888225 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1888225 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1888225 ']' 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.375 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.376 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.376 11:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.376 [2024-12-06 11:29:12.652062] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:27:40.376 [2024-12-06 11:29:12.652105] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.376 [2024-12-06 11:29:12.727644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.376 [2024-12-06 11:29:12.766908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.376 [2024-12-06 11:29:12.766943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.376 [2024-12-06 11:29:12.766949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.376 [2024-12-06 11:29:12.766955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.376 [2024-12-06 11:29:12.766959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:40.376 [2024-12-06 11:29:12.768372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:40.376 [2024-12-06 11:29:12.768489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:40.376 [2024-12-06 11:29:12.768599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:40.376 [2024-12-06 11:29:12.768600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.636 Malloc0
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.636 [2024-12-06 11:29:13.547081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.636 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.896 [2024-12-06 11:29:13.575821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1888374
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:40.896 11:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:42.812 11:29:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1888225
00:27:42.812 11:29:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:42.812 Read completed with error (sct=0, sc=8)
00:27:42.812 starting I/O failed
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs elided ...]
00:27:42.813 [2024-12-06 11:29:15.607487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs elided ...]
00:27:42.813 [2024-12-06 11:29:15.607670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs elided ...]
00:27:42.813 [2024-12-06 11:29:15.607848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs elided ...]
00:27:42.814 [2024-12-06 11:29:15.608032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.814 [2024-12-06 11:29:15.608244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.608264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
00:27:42.814 [2024-12-06 11:29:15.608431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.608441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
00:27:42.814 [2024-12-06 11:29:15.608599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.608609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
00:27:42.814 [2024-12-06 11:29:15.608763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.608772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
00:27:42.814 [2024-12-06 11:29:15.608849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.608857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
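The target-side setup traced by the `rpc_cmd` calls above (malloc bdev, TCP transport, subsystem, namespace, data and discovery listeners) can be sketched as the following rpc.py sequence. This is a sketch, not the test script itself: `SPDK_DIR` and the `run_rpc` dry-run helper are assumptions for illustration, and the real calls require an nvmf_tgt already listening on /var/tmp/spdk.sock.

```shell
# Sketch of the setup sequence the log traces. SPDK_DIR and run_rpc are
# hypothetical conveniences; the rpc.py subcommands and arguments are
# taken verbatim from the rpc_cmd lines in the log.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

run_rpc() {
    if [ -x "$RPC" ]; then
        "$RPC" "$@"          # real call against the running target
    else
        echo "rpc.py $*"     # dry run when no SPDK checkout is present
    fi
}

run_rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
run_rpc nvmf_create_transport -t tcp -o
run_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run_rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listener on 10.0.0.2:4420 is up, the test launches the `reconnect` example against it and then SIGKILLs the target to provoke the failures below.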
00:27:42.814 [2024-12-06 11:29:15.609010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.814 [2024-12-06 11:29:15.609019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.814 qpair failed and we were unable to recover it.
[... dozens of identical connect()-failed / qpair-failed retries (errno = 111, tqpair=0x7f9d40000b90, addr=10.0.0.2, port=4420) between 11:29:15.609106 and 11:29:15.618780 elided ...]
00:27:42.815 [2024-12-06 11:29:15.618888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-12-06 11:29:15.618912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.815 qpair failed and we were unable to recover it. 00:27:42.815 [2024-12-06 11:29:15.619101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-12-06 11:29:15.619127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.815 qpair failed and we were unable to recover it. 00:27:42.815 [2024-12-06 11:29:15.619417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-12-06 11:29:15.619442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.815 qpair failed and we were unable to recover it. 00:27:42.815 [2024-12-06 11:29:15.619613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.619637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.619726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.619750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.619943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.619967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.620087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.620114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.620218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.620242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.620470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.620494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.620679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.620702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.620808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.620833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.621004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.621028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.621199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.621224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.621384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.621409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.621607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.621636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.621816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.621845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.622098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.622128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.622392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.622421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.622589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.622618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.622853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.622882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.623083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.623112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.623281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.623310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.623490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.623519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.623732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.623762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.624048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.624106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.624309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.624341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.624552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.624584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.624716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.624747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.624922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.624953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.625139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.625177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.625355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.625386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.625576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.625608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.625882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.625914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.626089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.626122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.626227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.626258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 
00:27:42.816 [2024-12-06 11:29:15.626448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.626480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.626725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-12-06 11:29:15.626756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.816 qpair failed and we were unable to recover it. 00:27:42.816 [2024-12-06 11:29:15.626967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.626996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.627112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.627142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.627355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.627384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.627498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.627528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.627719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.627750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.627942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.627974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.628166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.628200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.628460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.628489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.628661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.628690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.628965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.628995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.629105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.629134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.629301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.629330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.629448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.629478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.629636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.629665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.629958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.629987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.630167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.630197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.630323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.630351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.630520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.630549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.630794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.630824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.630997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.631026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.631287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.631316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.631443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.631476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.631664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.631695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.631911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.632175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.632207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.632419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.632450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.632560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.632592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.632840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.632875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 
00:27:42.817 [2024-12-06 11:29:15.633008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.633042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.633179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.633212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.633330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.817 [2024-12-06 11:29:15.633362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.817 qpair failed and we were unable to recover it. 00:27:42.817 [2024-12-06 11:29:15.633577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.633609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 00:27:42.818 [2024-12-06 11:29:15.633891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.633930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 
00:27:42.818 [2024-12-06 11:29:15.634180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.634214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 00:27:42.818 [2024-12-06 11:29:15.634486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.634518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 00:27:42.818 [2024-12-06 11:29:15.634703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.634735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 00:27:42.818 [2024-12-06 11:29:15.634863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.634896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 00:27:42.818 [2024-12-06 11:29:15.635028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.818 [2024-12-06 11:29:15.635079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.818 qpair failed and we were unable to recover it. 
00:27:42.818 [2024-12-06 11:29:15.635235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.818 [2024-12-06 11:29:15.635268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.818 qpair failed and we were unable to recover it.
[... the three log lines above repeat ~115 more times with advancing timestamps (11:29:15.635542 through 11:29:15.659123, wall clock 00:27:42.818-00:27:42.821) as connection attempts to 10.0.0.2 port 4420 keep failing with the same errno ...]
00:27:42.821 [2024-12-06 11:29:15.659325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.659358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.659536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.659568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.659686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.659717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.659906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.659937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.660153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.660187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 
00:27:42.821 [2024-12-06 11:29:15.660368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.660400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.660506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.660537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.660649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.660680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.660864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.660896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.821 qpair failed and we were unable to recover it. 00:27:42.821 [2024-12-06 11:29:15.661083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.821 [2024-12-06 11:29:15.661116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.661392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.661424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.661638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.661670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.661864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.661895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.662116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.662253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.662428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.662634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.662789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.662921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.662952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.663153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.663187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.663387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.663420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.663608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.663640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.663764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.663796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.664037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.664083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.664257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.664290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.664403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.664438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.664805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.664882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.665119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.665159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.665412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.665446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.665549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.665582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.665792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.665825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.666020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.666053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.666264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.666296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.666469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.666500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.666749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.666782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.666891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.666921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.667037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.667079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.667282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.667314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.667508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.667540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.667759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.667791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 
00:27:42.822 [2024-12-06 11:29:15.668071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.668105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.668319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.668350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.668467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.822 [2024-12-06 11:29:15.668499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.822 qpair failed and we were unable to recover it. 00:27:42.822 [2024-12-06 11:29:15.668684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.668715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.668974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.669007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.669218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.669251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.669420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.669451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.669670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.669702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.669900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.669932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.670211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.670344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.670376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.670616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.670648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.670827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.670858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.671045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.671093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.671267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.671299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.671545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.671576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.671842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.671875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.672047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.672089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.672285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.672316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.672506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.672538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.672825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.672857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.673102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.673136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.673327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.673358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.673643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.673675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.673776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.673808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.673942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.673973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.674144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.674177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.674456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.674488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.674617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.674648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.674918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.674949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 
00:27:42.823 [2024-12-06 11:29:15.675151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.675185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.675378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.675410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.675531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.823 [2024-12-06 11:29:15.675563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.823 qpair failed and we were unable to recover it. 00:27:42.823 [2024-12-06 11:29:15.675737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.675769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.675898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.675930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.676206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.676239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.676437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.676469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.676658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.676690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.676873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.676904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.677131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.677164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.677364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.677401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.677590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.677622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.677795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.677827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.677943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.677974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.678157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.678190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.678436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.678467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.678659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.678691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.678896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.678927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.679111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.679145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.679382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.679414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.679614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.679645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.679838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.679870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.680143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.680177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.680447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.680478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.680600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.680632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.680845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.680876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.681046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.681086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.681392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.681423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.681695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.681727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.681913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.681944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.682116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.682148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.682427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.682459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.682641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.682673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.682863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.682894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.683116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.683148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 
00:27:42.824 [2024-12-06 11:29:15.683320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.683351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.683541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.824 [2024-12-06 11:29:15.683573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.824 qpair failed and we were unable to recover it. 00:27:42.824 [2024-12-06 11:29:15.683752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.683784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.683907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.683938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.684132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.684165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.684359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.684390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.684502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.684534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.684719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.684751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.684863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.684894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.685071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.685104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.685275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.685306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.685441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.685472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.685604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.685636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.685910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.685942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.686144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.686177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.686314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.686346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.686557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.686601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.686790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.686821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.687023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.687055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.687282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.687314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.687583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.687615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.687802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.687834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.688079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.688112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.688311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.688343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.688552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.688584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.688757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.688788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.688919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.688950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.689053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.689092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.689296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.689328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.689539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.689570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.689698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.689731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.689907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.689938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.690129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.690162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.690354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.690386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.690489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.690520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 
00:27:42.825 [2024-12-06 11:29:15.690645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.825 [2024-12-06 11:29:15.690676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.825 qpair failed and we were unable to recover it. 00:27:42.825 [2024-12-06 11:29:15.690866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.690898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.691095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.691128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.691370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.691405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.691651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.691683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.691952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.691984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.692200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.692232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.692475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.692508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.692691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.692729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.692926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.692957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.693157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.693189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.693387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.693420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.693556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.693587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.693772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.693804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.693973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.694006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.694288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.694320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.694436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.694467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.694718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.694749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.694863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.694895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.695182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.695216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.695476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.695508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.695714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.695747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.695995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.696028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.696286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.696319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.696502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.696534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.696705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.696738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.696934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.696965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.697089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.697122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.697297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.697330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.697589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.697621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.697868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.697900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.698005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.698037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.698169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.698202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.698386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.698417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 00:27:42.826 [2024-12-06 11:29:15.698687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.826 [2024-12-06 11:29:15.698718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.826 qpair failed and we were unable to recover it. 
00:27:42.826 [2024-12-06 11:29:15.698922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-12-06 11:29:15.698954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.827 qpair failed and we were unable to recover it. 00:27:42.827 [2024-12-06 11:29:15.699137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-12-06 11:29:15.699169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.827 qpair failed and we were unable to recover it. 00:27:42.827 [2024-12-06 11:29:15.699432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-12-06 11:29:15.699464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.827 qpair failed and we were unable to recover it. 00:27:42.827 [2024-12-06 11:29:15.699659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-12-06 11:29:15.699691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.827 qpair failed and we were unable to recover it. 00:27:42.827 [2024-12-06 11:29:15.699934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-12-06 11:29:15.699966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:42.827 qpair failed and we were unable to recover it. 
00:27:42.827 [2024-12-06 11:29:15.700165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.700197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.700416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.700448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.700624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.700656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.700902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.700934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.701223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.701256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.701543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.701576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.701753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.701784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.701998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.702030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.702156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.702189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.702382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.702419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.702597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.702629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.702809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.702842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.703025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.703057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.703250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.703282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.703454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.703486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.703658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.703690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.703933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.703965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.704176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.704209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.704315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.704347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.704538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.704570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.704811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.704844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.705014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.705046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.705264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.705297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.705573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.705605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.705815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.705847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.706055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.706097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.706225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.706257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.706431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.706463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.706647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.827 [2024-12-06 11:29:15.706679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.827 qpair failed and we were unable to recover it.
00:27:42.827 [2024-12-06 11:29:15.706806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.706838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.707104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.707138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.707318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.707350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.707563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.707595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.707862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.707894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.708018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.708050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.708311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.708342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.708483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.708515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.708640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.708672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.708888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.708920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.709166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.709199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.709305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.709337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.709522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.709553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.709748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.709780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.709954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.709986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.710199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.710233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.710461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.710495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.710674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.710705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.710892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.710924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.711065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.711098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.711260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.711496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.711567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.711862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.711899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.712094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.712128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.712346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.712379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.712488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.712520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.712628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.712659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.712857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.712889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.713087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.713120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.713367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.713399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.828 [2024-12-06 11:29:15.713524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.828 [2024-12-06 11:29:15.713555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.828 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.713801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.713833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.714030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.714074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.714207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.714238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.714410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.714450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.714656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.714688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.714805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.714838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.715040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.715218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.715426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.715643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.715860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.715982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.716013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.716209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.716242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.716419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.716452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.716720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.716753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.716928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.716959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.717221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.717255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.717575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.717607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.717798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.717830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.718020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.718052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.718196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.718228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.718386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.718418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.718619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.718650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.718842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.718875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.719156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.719189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.719438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.719470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.719601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.719634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.719836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.719868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.720077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.720109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.720331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.720364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.720492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.720529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.720772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.720804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.829 [2024-12-06 11:29:15.720924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.829 [2024-12-06 11:29:15.720956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.829 qpair failed and we were unable to recover it.
00:27:42.830 [2024-12-06 11:29:15.721164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.830 [2024-12-06 11:29:15.721199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.830 qpair failed and we were unable to recover it.
00:27:42.830 [2024-12-06 11:29:15.721326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.830 [2024-12-06 11:29:15.721358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:42.830 qpair failed and we were unable to recover it.
00:27:42.830 [2024-12-06 11:29:15.721536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.721568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.721812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.721844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.722052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.722093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.722282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.722313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.722570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.722601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.722776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.722808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.723087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.723121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.723362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.723394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.723664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.723696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.723820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.723853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.724119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.724151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.724400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.724431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.724638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.724670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.724839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.724870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.725002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.725033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.725357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.725390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.725564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.725596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.725866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.725897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.726210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.726242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.726511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.726542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.726676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.726708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.726894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.726926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.727065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.727099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.727209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.727240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.727509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.727815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.727848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.728044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.728089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.728286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.728318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.728447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.728478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.728648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.728680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 
00:27:42.830 [2024-12-06 11:29:15.728786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.830 [2024-12-06 11:29:15.728817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.830 qpair failed and we were unable to recover it. 00:27:42.830 [2024-12-06 11:29:15.729001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.729033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.729235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.729268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.729390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.729422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.729619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.729651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 
00:27:42.831 [2024-12-06 11:29:15.729898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.729937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.730055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.730099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.730278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.730311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.730506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.730539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.730778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.730810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 
00:27:42.831 [2024-12-06 11:29:15.730990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.731022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.731163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.731197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.731467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.731499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.731776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.731808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.731940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.731973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 
00:27:42.831 [2024-12-06 11:29:15.732174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.732207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.732514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.732546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.732790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.732826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.733015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.733048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.733183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.733216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 
00:27:42.831 [2024-12-06 11:29:15.733329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.733361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:42.831 [2024-12-06 11:29:15.733530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-12-06 11:29:15.733563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:42.831 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.733766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.733797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.734043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.734086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.734206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.734238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 
00:27:43.111 [2024-12-06 11:29:15.734488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.734520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.734764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.734795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.734981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.735219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.735367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 
00:27:43.111 [2024-12-06 11:29:15.735518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.735744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.735890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.735922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.736051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.736096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.736319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.736352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 
00:27:43.111 [2024-12-06 11:29:15.736540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.111 [2024-12-06 11:29:15.736572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.111 qpair failed and we were unable to recover it. 00:27:43.111 [2024-12-06 11:29:15.736740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.736771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.737085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.737207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.737239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.737359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.737391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 
00:27:43.112 [2024-12-06 11:29:15.737604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.737636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.737852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.737884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.737996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.738029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.738178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.738211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.738455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.738487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 
00:27:43.112 [2024-12-06 11:29:15.738603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.738640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.738758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.738790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.738984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.739016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.739136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.739169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.739439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 
00:27:43.112 [2024-12-06 11:29:15.739680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.739712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.739886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.739918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.740021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.740054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.740242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.740274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 00:27:43.112 [2024-12-06 11:29:15.740444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.112 [2024-12-06 11:29:15.740476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.112 qpair failed and we were unable to recover it. 
00:27:43.112 [2024-12-06 11:29:15.740658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.112 [2024-12-06 11:29:15.740690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.112 qpair failed and we were unable to recover it.
00:27:43.115 [2024-12-06 11:29:15.766530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.766562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.766773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.766805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.767000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.767031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.767223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.767257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.767476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.767507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 
00:27:43.115 [2024-12-06 11:29:15.767711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.767742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.767856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.767889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.768024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.768056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.768284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.768316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.768512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.768545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 
00:27:43.115 [2024-12-06 11:29:15.768777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.768807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.768935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.768967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.769238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.769272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.769453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.769486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.769657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.769689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 
00:27:43.115 [2024-12-06 11:29:15.769889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.769921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.770047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.770088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.770205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.770237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.770468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.770499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.770615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.770647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 
00:27:43.115 [2024-12-06 11:29:15.770829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.770860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.771078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.771110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.771307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.771339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.771514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.771546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.115 [2024-12-06 11:29:15.771677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.771709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 
00:27:43.115 [2024-12-06 11:29:15.771893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.115 [2024-12-06 11:29:15.771926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.115 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.772057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.772105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.772225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.772255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.772429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.772461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.772649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.772681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.772853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.772884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.773080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.773114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.773336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.773368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.773595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.773627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.773829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.773860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.774084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.774123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.774395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.774429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.774612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.774644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.774797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.774928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.774959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.775132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.775167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.775448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.775480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.775686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.775717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.775956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.775988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.776223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.776256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.776443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.776476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.776610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.776642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.776833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.776866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.777044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.777089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.777298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.777331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.777455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.777486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.777688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.777721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.777896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.777928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.778119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.778152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.778271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.778303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.778426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.778459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.778569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.778602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.778774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.778806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.778992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.779024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.779228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.779262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.779446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.779478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.779602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.779633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.779756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.779788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.780035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.780076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.780220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.780250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 
00:27:43.116 [2024-12-06 11:29:15.780400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.780432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.780684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.780716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.116 qpair failed and we were unable to recover it. 00:27:43.116 [2024-12-06 11:29:15.780820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.116 [2024-12-06 11:29:15.780853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.780967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.780998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.781302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.781336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 
00:27:43.117 [2024-12-06 11:29:15.781581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.781613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.781890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.781922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.782113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.782146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.782257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.782289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 00:27:43.117 [2024-12-06 11:29:15.782490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.117 [2024-12-06 11:29:15.782522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.117 qpair failed and we were unable to recover it. 
00:27:43.117 [2024-12-06 11:29:15.782636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.117 [2024-12-06 11:29:15.782673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.117 qpair failed and we were unable to recover it.
[... the three lines above repeat 66 more times for tqpair=0x7f9d40000b90, timestamps 11:29:15.782779 through 11:29:15.797299 ...]
00:27:43.118 [2024-12-06 11:29:15.797526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.118 [2024-12-06 11:29:15.797610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.118 qpair failed and we were unable to recover it.
[... the three lines above repeat 47 more times for tqpair=0x7f9d44000b90, timestamps 11:29:15.797749 through 11:29:15.807174 ...]
00:27:43.120 [2024-12-06 11:29:15.807301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.807331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.807438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.807470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.807580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.807610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.807740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.807771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.807874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.807906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.808009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.808045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.808315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.808348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.808602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.808634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.808819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.808851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.809023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.809055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.809189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.809222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.809405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.809435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.809628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.809659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.809792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.809824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.810099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.810132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.810318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.810350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.810466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.810496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.810763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.810796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.810999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.811031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.811230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.811397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.811428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.811584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.811826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.811858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.812079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.812112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.812237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.812269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.812540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.812571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.812893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.813100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.813133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.813250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.813281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.813584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.813616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.813739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.813770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.813944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.813975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.814158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.814192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.814325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.814357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.814536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.814568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 
00:27:43.120 [2024-12-06 11:29:15.814840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.814872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.815007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.815038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.815180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.120 [2024-12-06 11:29:15.815212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.120 qpair failed and we were unable to recover it. 00:27:43.120 [2024-12-06 11:29:15.815493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.815525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.815643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.815673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.815850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.815881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.816075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.816108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.816355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.816389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.816577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.816609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.816791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.816823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.816941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.816978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.817118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.817150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.817257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.817288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.817399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.817432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.817635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.817667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.817840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.817872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.818042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.818211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.818483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.818516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.818709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.818741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.818920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.818952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.819124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.819158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.819454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.819486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.819602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.819634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.819752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.819784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.819924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.819957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.820071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.820104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.820391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.820423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.820603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.820635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.820808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.820841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.821024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.821057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.821291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.821323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.821438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.821471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.821646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.821678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.821944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.821976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.822168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.822200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.822377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.822410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.822596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.822628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.822817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.822850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.822973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.823007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.823189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.823221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.121 [2024-12-06 11:29:15.823463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.823496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.823704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.823736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.823921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.823953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.824092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.824126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 00:27:43.121 [2024-12-06 11:29:15.824304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.121 [2024-12-06 11:29:15.824336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.121 qpair failed and we were unable to recover it. 
00:27:43.124 [2024-12-06 11:29:15.847591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.847622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.847736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.847766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.848013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.848052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.848185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.848217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.848492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.848525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 
00:27:43.124 [2024-12-06 11:29:15.848658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.848689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.848900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.848932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.849108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.849141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.849261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.849292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.849402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.849432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 
00:27:43.124 [2024-12-06 11:29:15.849544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.849574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.849821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.849853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.124 [2024-12-06 11:29:15.850146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.124 [2024-12-06 11:29:15.850178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.124 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.850292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.850323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.850568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.850601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.850792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.850824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.851018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.851179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.851317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.851588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.851809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.851963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.851994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.852235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.852268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.852463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.852494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.852678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.852710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.852838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.852870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.852979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.853010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.853224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.853257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.853375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.853407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.853627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.853660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.853853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.853885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.854006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.854037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.854309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.854342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.854513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.854543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.854730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.854763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.855035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.855077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.855206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.855239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.855412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.855444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.855696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.855729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.855999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.856032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.856175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.856206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.856428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.856461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.856655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.856691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.856876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.856906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.857098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.857130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.857334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.857366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.857611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.857643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.857752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.857782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.858045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.858095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.858321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.858353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.858555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.858587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.858692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.858722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.858902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.858934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.859180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.859213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.125 [2024-12-06 11:29:15.859391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.859423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 
00:27:43.125 [2024-12-06 11:29:15.859641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.125 [2024-12-06 11:29:15.859673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.125 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.859801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.859833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.860025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.860056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.860238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.860268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.860370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.860400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 
00:27:43.126 [2024-12-06 11:29:15.860519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.860550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.860759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.860789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.860979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.861010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.861196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.861231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.861362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.861394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 
00:27:43.126 [2024-12-06 11:29:15.861590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.861623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.861893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.861925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.862035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.862078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.862186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.862219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.862468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.862501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 
00:27:43.126 [2024-12-06 11:29:15.862790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.862821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.863050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.863094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.863280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.863312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.863498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.863530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 00:27:43.126 [2024-12-06 11:29:15.863651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.126 [2024-12-06 11:29:15.863681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.126 qpair failed and we were unable to recover it. 
00:27:43.126 [2024-12-06 11:29:15.863802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.126 [2024-12-06 11:29:15.863833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.126 qpair failed and we were unable to recover it.
00:27:43.129 [2024-12-06 11:29:15.888408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.888439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.888545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.888575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.888781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.888812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.889100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.889365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.889397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 
00:27:43.129 [2024-12-06 11:29:15.889503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.889535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.889650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.889681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.889938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.889970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.890105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.890139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.890354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.890385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 
00:27:43.129 [2024-12-06 11:29:15.890514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.890544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.890744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.891013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.891050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.891166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.891200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.891471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.891502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 
00:27:43.129 [2024-12-06 11:29:15.891621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.891653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.891822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.891853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.892035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.892073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.892256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.892286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.892556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.892587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 
00:27:43.129 [2024-12-06 11:29:15.892773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.892804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.893024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.893056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.893249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.893280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.893462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.893493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 00:27:43.129 [2024-12-06 11:29:15.893674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.129 [2024-12-06 11:29:15.893706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.129 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.893825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.893855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.893995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.894029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.894239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2e540 is same with the state(6) to be set 00:27:43.130 [2024-12-06 11:29:15.894604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.894674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.894889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.894925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.895047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.895094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.895305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.895337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.895533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.895565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.895692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.895725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.895896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.895927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.896129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.896163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.896349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.896381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.896658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.896691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.896827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.896860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.896973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.897005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.897204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.897238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.897533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.897565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.897695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.897727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.897850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.897881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.898179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.898212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.898459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.898491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.898615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.898647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.898914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.898946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.899101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.899134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.899347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.899379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.899625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.899657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.899862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.899893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.900093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.900126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.900231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.900263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.900507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.900539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.900712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.900744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.900955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.900987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.901182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.901214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.901315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.901347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.901456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.901488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.901663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.901697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.901908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.901941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.902056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.902104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.902226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.902258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.902434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.902467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.902586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.902618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-12-06 11:29:15.902887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.902925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.130 [2024-12-06 11:29:15.903173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.130 [2024-12-06 11:29:15.903206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.130 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.903382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.903415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.903520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.903552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.903683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.903715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-12-06 11:29:15.903815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.903846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.904122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.904156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.904417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.904449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.904562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.904594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 00:27:43.131 [2024-12-06 11:29:15.904810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.131 [2024-12-06 11:29:15.904842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-12-06 11:29:15.904949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.904980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.905151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.905184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.905394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.905426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.905542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.905575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.905733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.905767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.905886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.905918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.906093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.906125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.906239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.906272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.906542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.906575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.906774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.906807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.906977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.907009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.907208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.907243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.907354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.907385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.907583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.907616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.907858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.907889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.908080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.908113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.908241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.908273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.908451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.908483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.908588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.908619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.908899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.908932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.909056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.909097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.909365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.909398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.909642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.909673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.909774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.909806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.909925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.909957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.910280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.910313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.910557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.910590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.910703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.910735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.910858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.910890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.911134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.911167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.911300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.911338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.911441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.911472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.911586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.911617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.131 [2024-12-06 11:29:15.911805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.131 [2024-12-06 11:29:15.911837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.131 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.912096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.912129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.912343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.912375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.912571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.912603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.912798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.912830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.912948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.912979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.913244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.913282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.913472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.913504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.913751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.913781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.913988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.914212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.914353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.914495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.914697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.914899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.914931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.915954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.915985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.916255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.916288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.916483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.916515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.916703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.916736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.916952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.916984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.917121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.917153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.917275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.917305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.917414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.917446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.917568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.917599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.917781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.917813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.918106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.918140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.918318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.918351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.918487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.918519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.918646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.918677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.918937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.918968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.919075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.919108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.919305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.132 [2024-12-06 11:29:15.919607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.132 [2024-12-06 11:29:15.919644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.132 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.919897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.919929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.920210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.920456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.920488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.920787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.920819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.921094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.921141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.921286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.921332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.921490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.921536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.921756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.921807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.922115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.922165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.922370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.922416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.922564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.922605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.922823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.922855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.923081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.923407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.923552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.923792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.923982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.924013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.924130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.924163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.924345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.924377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.924649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.924681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.924853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.924885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.924991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.925022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.925168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.925201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.925451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.925484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.925597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.925629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.925770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.925802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.926028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.926071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.926341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.926374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.926492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.926523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.926649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.926681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.926950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.926981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.927201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.927233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.927365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.927397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.927582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.927613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.927882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.927914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.928197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.928230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.928338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.928370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.928501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.928531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.928761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.928974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.929006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.929212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.133 [2024-12-06 11:29:15.929245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.133 qpair failed and we were unable to recover it.
00:27:43.133 [2024-12-06 11:29:15.929424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.134 [2024-12-06 11:29:15.929457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.134 qpair failed and we were unable to recover it.
00:27:43.134 [2024-12-06 11:29:15.929671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.134 [2024-12-06 11:29:15.929703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.134 qpair failed and we were unable to recover it.
00:27:43.134 [2024-12-06 11:29:15.929874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.929906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.930029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.930067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.930184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.930214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.930421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.930453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.930667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.930699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.930818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.930850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.931068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.931099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.931375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.931408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.931589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.931619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.931838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.931869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.932122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.932153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.932421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.932455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.932655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.932685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.932933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.932972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.933086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.933117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.933249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.933279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.933458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.933490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.933672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.933701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.933818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.933849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.934069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.934104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.934376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.934408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.934539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.934569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.934744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.934778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.934980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.935200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.935356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.935519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.935738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.935959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.935989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.936164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.936197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.936387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.936420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.936632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.936663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.936871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.936903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.937167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.937200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.937400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.937432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.937617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.937657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.937792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.937823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.937999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.938029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.938255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.938288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 00:27:43.134 [2024-12-06 11:29:15.938498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.938529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.134 qpair failed and we were unable to recover it. 
00:27:43.134 [2024-12-06 11:29:15.938730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.134 [2024-12-06 11:29:15.938761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.938947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.938979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.939268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.939301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.939499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.939531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.939706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.939738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.939859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.939889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.940099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.940131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.940375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.940404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.940589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.940619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.940747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.940777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.940981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.941013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.941267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.941300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.941484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.941515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.941699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.941731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.941911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.941943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.942113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.942145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.942387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.942420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.942664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.942695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.942921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.942953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.943094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.943126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.943242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.943272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.943539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.943570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.943857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.943890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.944029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.944070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.944260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.944290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.944509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.944539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.944726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.944756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.944941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.944973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.945141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.945352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.945384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.945501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.945532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.945712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.945745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.945849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.945880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.946179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.946210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 00:27:43.135 [2024-12-06 11:29:15.946419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.135 [2024-12-06 11:29:15.946449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.135 qpair failed and we were unable to recover it. 
00:27:43.135 [2024-12-06 11:29:15.946553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.135 [2024-12-06 11:29:15.946588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.135 qpair failed and we were unable to recover it.
[The same three-line sequence — "connect() failed, errno = 111", "sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt from 11:29:15.946856 through 11:29:15.972211.]
00:27:43.138 [2024-12-06 11:29:15.972329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.972361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.972496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.972528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.972700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.972730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.972934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.972967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.973153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.973187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 
00:27:43.138 [2024-12-06 11:29:15.973406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.973449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.973567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.973598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.973781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.973812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.973946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.973976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.974159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.974192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 
00:27:43.138 [2024-12-06 11:29:15.974460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.974492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.974607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.974637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.974764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.974795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.974916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.138 [2024-12-06 11:29:15.974948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.138 qpair failed and we were unable to recover it. 00:27:43.138 [2024-12-06 11:29:15.975219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.975251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.975457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.975489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.975688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.975720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.975829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.975859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.975979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.976011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.976214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.976247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.976499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.976531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.976773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.976805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.977042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.977216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.977430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.977572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.977795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.977934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.977965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.978138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.978170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.978414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.978446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.978629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.978660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.978778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.978810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.979001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.979031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.979161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.979192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.979459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.979490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.979604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.979636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.979813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.979845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.980023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.980054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.980315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.980347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.980530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.980561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.980679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.980710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.980839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.980872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.981167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.981199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.981471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.981503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.981620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.981651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.981785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.981823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.981948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.981978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.982254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.982289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.982504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.982537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.982643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.982674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.982846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.982878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.983096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.983127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.983259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.983291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.983564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.983597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 00:27:43.139 [2024-12-06 11:29:15.983783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.139 [2024-12-06 11:29:15.983814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.139 qpair failed and we were unable to recover it. 
00:27:43.139 [2024-12-06 11:29:15.984112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.984145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.984315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.984346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.984559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.984592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.984773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.984804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.985052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.985096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 
00:27:43.140 [2024-12-06 11:29:15.985355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.985386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.985559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.985590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.985798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.985830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.986028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.986070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.986196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.986227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 
00:27:43.140 [2024-12-06 11:29:15.986412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.986443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.986571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.986603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.986868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.986901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.987087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.987120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.987289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.987322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 
00:27:43.140 [2024-12-06 11:29:15.987491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.987522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.987630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.987662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.987912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.987945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.988136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.988169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.988445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.988477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 
00:27:43.140 [2024-12-06 11:29:15.988597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.988629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.988729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.988761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.989002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.989034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.989246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.989278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 00:27:43.140 [2024-12-06 11:29:15.989497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.140 [2024-12-06 11:29:15.989528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.140 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.014313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.014345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.014518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.014550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.014749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.014964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.014996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.015123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.015156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.015346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.015378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.015674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.015707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.015947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.015979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.016158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.016192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.016392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.016424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.016641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.016674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.016856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.016888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.017078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.017112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.017390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.017423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.017554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.017586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.017701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.017734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.017953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.017986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.018258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.018290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.018562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.018594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.018699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.018731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.018920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.018952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.019075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.019108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.019242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.019274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.019567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.019600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.019845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.019877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.020006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.020038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.020179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.020212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.020460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.020491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.020755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.020788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 00:27:43.143 [2024-12-06 11:29:16.020971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.143 [2024-12-06 11:29:16.021004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.143 qpair failed and we were unable to recover it. 
00:27:43.143 [2024-12-06 11:29:16.021281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.021315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.021484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.021516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.021807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.021839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.022119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.022153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.022359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.022391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 
00:27:43.144 [2024-12-06 11:29:16.022641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.022673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.022863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.022896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.023077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.023109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.023225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.023258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.023442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.023475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 
00:27:43.144 [2024-12-06 11:29:16.023657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.023690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.023814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.023846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.024039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.024090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.024265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.024296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.024500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.024532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 
00:27:43.144 [2024-12-06 11:29:16.024756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.024788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.024906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.024938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.025210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.025242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.025450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.025483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.025659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.025690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 
00:27:43.144 [2024-12-06 11:29:16.025888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.025920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.026114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.026148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.026401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.026431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.026599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.026631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 00:27:43.144 [2024-12-06 11:29:16.026898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.144 [2024-12-06 11:29:16.026930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.144 qpair failed and we were unable to recover it. 
00:27:43.422 [2024-12-06 11:29:16.027076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.422 [2024-12-06 11:29:16.027109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.422 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.027287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.027319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.027449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.027481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.027752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.027784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.028028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.028065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.028238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.028271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.028463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.028495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.028680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.028711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.028911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.028942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.029217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.029250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.029368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.029401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.029588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.029619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.029822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.029854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.030038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.030241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.030443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.030648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.030805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.030945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.030977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.031088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.031121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.031372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.031404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.031601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.031633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.031890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.031923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.032192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.032226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.032524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.032556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.032672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.032704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.032919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.032950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.033128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.033170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.033458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.033491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.033676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.033708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 
00:27:43.423 [2024-12-06 11:29:16.033978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.034010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.423 qpair failed and we were unable to recover it. 00:27:43.423 [2024-12-06 11:29:16.034288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.423 [2024-12-06 11:29:16.034321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.034557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.034589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.034863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.034895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.035191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.035225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.035525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.035557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.035809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.035842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.036040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.036080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.036186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.036217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.036402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.036435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.036621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.036653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.036927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.036960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.037149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.037183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.037454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.037487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.037690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.037721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.037967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.037999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.038191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.038224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.038430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.038462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.038633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.038664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.038931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.038963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.039149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.039182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.039354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.039385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.039672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.039703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.039874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.039906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.040092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.040127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.040316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.040348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.040541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.040572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.040760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.040792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.040991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.041022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.041227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.041261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 
00:27:43.424 [2024-12-06 11:29:16.041445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.041477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.041648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.041680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.041947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.041979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.042237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.424 [2024-12-06 11:29:16.042269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.424 qpair failed and we were unable to recover it. 00:27:43.424 [2024-12-06 11:29:16.042468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.042499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.042797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.042829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.042945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.042976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.043163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.043202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.043413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.043446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.043656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.043688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.043801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.043833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.043947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.043979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.044186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.044220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.044408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.044441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.044623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.044655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.044855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.044887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.045078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.045111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.045410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.045442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.045570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.045602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.045785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.045816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.046101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.046134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.046265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.046298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.046467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.046498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.046664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.046696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.046908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.046939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.047131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.047164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.047369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.047401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.047572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.047604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.047707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.047738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.047852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.047883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.048099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.048133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.048386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.048417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.048686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.048718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.048897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.048929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.049176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.049211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 
00:27:43.425 [2024-12-06 11:29:16.049454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.425 [2024-12-06 11:29:16.049486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.425 qpair failed and we were unable to recover it. 00:27:43.425 [2024-12-06 11:29:16.049658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.049690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.049808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.049839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.050105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.050137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.050313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.050344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.050518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.050549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.050719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.050752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.050936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.050968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.051151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.051288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.051319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.051496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.051528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.051700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.051732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.051914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.051952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.052155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.052189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.052327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.052359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.052627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.052658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.052858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.052890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.053016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.053048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.053328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.053361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.053566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.053599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.053723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.053756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.053937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.053969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.054094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.054127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.054314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.054347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.054536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.054568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.054814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.054847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.055099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.055134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.055378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.055411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.055624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.055656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 00:27:43.426 [2024-12-06 11:29:16.055900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.426 [2024-12-06 11:29:16.055933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.426 qpair failed and we were unable to recover it. 
00:27:43.426 [2024-12-06 11:29:16.056118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.426 [2024-12-06 11:29:16.056151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.426 qpair failed and we were unable to recover it.
00:27:43.426 [2024-12-06 11:29:16.056396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.426 [2024-12-06 11:29:16.056429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.426 qpair failed and we were unable to recover it.
00:27:43.426 [2024-12-06 11:29:16.056531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.426 [2024-12-06 11:29:16.056564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.426 qpair failed and we were unable to recover it.
00:27:43.426 [2024-12-06 11:29:16.056747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.426 [2024-12-06 11:29:16.056780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.426 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.056897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.056929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.057057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.057098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.057269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.057302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.057489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.057521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.057717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.057748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.057941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.057974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.058218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.058251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.058457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.058564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.058596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.058727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.058760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.058876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.058908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.059093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.059125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.059293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.059325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.059518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.059549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.059663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.059695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.059885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.059918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.060113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.060147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.060349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.060382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.060654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.060691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.060824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.060856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.061156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.061190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.061313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.061345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.061474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.061506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.061779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.061812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.062080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.062112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.062314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.062345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.062463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.062495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.062705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.062737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.062956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.062988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.063159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.063192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.063308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.063341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.063472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.063504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.427 [2024-12-06 11:29:16.063788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.427 [2024-12-06 11:29:16.063820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.427 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.064027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.064067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.064265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.064297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.064550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.064582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.064763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.064796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.065034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.065073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.065345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.065377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.065587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.065618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.065803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.065836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.066135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.066168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.066342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.066374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.066566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.066599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.066728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.066760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.066940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.066973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.067161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.067194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.067310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.067343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.067562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.067595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.067866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.067898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.068092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.068126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.068335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.068367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.068550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.068581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.068753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.068785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.068997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.069028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.069279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.069463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.069494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.069707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.069739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.069995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.070031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.070246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.428 [2024-12-06 11:29:16.070279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.428 qpair failed and we were unable to recover it.
00:27:43.428 [2024-12-06 11:29:16.070399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.070430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.070671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.070703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.070872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.070905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.071091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.071124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.071251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.071283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.071411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.071443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.071543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.071574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.071760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.071792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.072079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.072111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.072287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.072319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.072441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.072595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.072627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.072821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.072852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.073121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.073154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.073427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.073460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.073588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.073619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.073738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.073771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.074040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.074080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.074200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.074232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.074441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.074473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.074644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.074676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.074867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.074899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.075025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.075067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.075339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.075371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.075483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.075515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.075698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.075731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.075833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.075865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.076050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.076093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.076365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.076397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.076504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.076536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.076807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.429 [2024-12-06 11:29:16.076838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.429 qpair failed and we were unable to recover it.
00:27:43.429 [2024-12-06 11:29:16.077089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.429 [2024-12-06 11:29:16.077123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.429 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.077293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.077326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.077541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.077572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.077871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.078097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.078130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.078427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.078460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.078728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.078760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.079035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.079082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.079269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.079300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.079403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.079435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.079624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.079656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.079860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.079893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.080163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.080195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.080401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.080433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.080625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.080658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.080920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.080953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.081083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.081117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.081380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.081413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.081680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.081712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.081925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.081958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.082089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.082122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.082316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.082349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.082590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.082622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.082864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.082896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.083144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.083177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.083356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.083389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.083668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.083699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.083880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.083911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.084101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.084134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.084270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.084302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 
00:27:43.430 [2024-12-06 11:29:16.084506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.084539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.084679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.084711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.084812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.084844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.430 [2024-12-06 11:29:16.085052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.430 [2024-12-06 11:29:16.085097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.430 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.085330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.085401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.085551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.085585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.085762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.085795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.086039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.086089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.086262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.086527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.086559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.086735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.086766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.086895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.086926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.087039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.087083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.087329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.087361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.087530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.087562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.087677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.087708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.087840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.087872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.087983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.088031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.088356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.088426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.088638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.088673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.088918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.088950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.089141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.089174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.089419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.089451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.089698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.089730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.089916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.089948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.090134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.090168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.090352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.090384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.090515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.090547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.090736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.090768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.090959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.090991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.091127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.091160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.091378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.091411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.091657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.091689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.091931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.091964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.092185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.092218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 
00:27:43.431 [2024-12-06 11:29:16.092460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.092492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.431 [2024-12-06 11:29:16.092829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.431 qpair failed and we were unable to recover it. 00:27:43.431 [2024-12-06 11:29:16.093093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.093125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.093257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.093289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.093486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.093518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 
00:27:43.432 [2024-12-06 11:29:16.093792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.093824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.094079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.094114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.094300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.094332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.094575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.094607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.094906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.094939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 
00:27:43.432 [2024-12-06 11:29:16.095126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.095159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.095345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.095377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.095549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.095581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.095857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.095890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.096018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.096050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 
00:27:43.432 [2024-12-06 11:29:16.096242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.096275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.096543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.096574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.096745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.096777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.096964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.096996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 00:27:43.432 [2024-12-06 11:29:16.097102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.432 [2024-12-06 11:29:16.097136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.432 qpair failed and we were unable to recover it. 
00:27:43.432 [2024-12-06 11:29:16.097304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.432 [2024-12-06 11:29:16.097334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.432 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d40000b90, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:29:16.097608 through 11:29:16.122448 ...]
00:27:43.436 [2024-12-06 11:29:16.122644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.436 [2024-12-06 11:29:16.122677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.436 qpair failed and we were unable to recover it.
00:27:43.436 [2024-12-06 11:29:16.122788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.122821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.123011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.123179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.123341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.123502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 
00:27:43.436 [2024-12-06 11:29:16.123710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.123925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.123958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.124137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.124171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.124458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.124492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.124720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 
00:27:43.436 [2024-12-06 11:29:16.124834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.124867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.125114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.125148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.125355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.125388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.125586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.125618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.125786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.125817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 
00:27:43.436 [2024-12-06 11:29:16.125930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.125962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.126071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.126105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.126363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.126394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.126591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.126625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.126838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.126871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 
00:27:43.436 [2024-12-06 11:29:16.127005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.127038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.127304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.127337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.127534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.127567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.127767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.127801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.127927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.127960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 
00:27:43.436 [2024-12-06 11:29:16.128080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.128114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.436 [2024-12-06 11:29:16.128241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.436 [2024-12-06 11:29:16.128273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.436 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.128458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.128490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.128677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.128711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.128885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.128917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.129204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.129238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.129369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.129400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.129517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.129550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.129664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.129697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.129893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.129930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.130049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.130103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.130226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.130257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.130490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.130523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.130773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.130806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.130994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.131027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.131294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.131326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.131441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.131473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.131704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.131736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.131938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.131972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.132143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.132177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.132397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.132431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.132611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.132644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.132771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.132805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.132951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.132984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.133273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.133446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.133476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.133672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.133705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.133889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.133920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.134022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.134053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.134258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.134290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.134472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.134505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.134712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.134745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.134866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.134899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.135011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.135043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.437 [2024-12-06 11:29:16.135245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.135277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 
00:27:43.437 [2024-12-06 11:29:16.135537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.437 [2024-12-06 11:29:16.135569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.437 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.135869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.135901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.136099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.136133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.136246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.136279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.136519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.136552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 
00:27:43.438 [2024-12-06 11:29:16.136666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.136698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.136836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.136868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.137102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.137269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.137417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 
00:27:43.438 [2024-12-06 11:29:16.137565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.137785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.137967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.137999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.138184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.138216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.138343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.138382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 
00:27:43.438 [2024-12-06 11:29:16.138652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.138684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.138808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.138840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.139012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.139187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.139333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 
00:27:43.438 [2024-12-06 11:29:16.139488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.139695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.139844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.139877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.140128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.140162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 00:27:43.438 [2024-12-06 11:29:16.140336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.438 [2024-12-06 11:29:16.140369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.438 qpair failed and we were unable to recover it. 
00:27:43.438 [2024-12-06 11:29:16.140639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.438 [2024-12-06 11:29:16.140671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.438 qpair failed and we were unable to recover it.
00:27:43.438 [2024-12-06 11:29:16.140841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.438 [2024-12-06 11:29:16.140873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.438 qpair failed and we were unable to recover it.
00:27:43.438 [2024-12-06 11:29:16.140987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.438 [2024-12-06 11:29:16.141020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.438 qpair failed and we were unable to recover it.
00:27:43.438 [2024-12-06 11:29:16.141177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.141212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.141421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.141454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.141631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.141664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.141877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.141909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.142020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.142054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.142250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.142283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.142398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.142430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.142639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.142671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.142770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.142802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.143002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.143035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.143228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.143260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.143448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.143481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.143692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.143723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.143912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.143944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.144084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.144118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.144291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.144324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.144519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.144552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.144741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.144774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.144980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.145014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.145233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.145267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.145480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.145512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.145629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.145662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.145853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.145884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.146207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.146240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.146426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.146460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.146646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.146678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.146788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.146827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.147093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.147126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.147244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.147277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.147495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.147527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.147648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.147681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.147963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.147995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.439 qpair failed and we were unable to recover it.
00:27:43.439 [2024-12-06 11:29:16.148242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.439 [2024-12-06 11:29:16.148274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.148463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.148495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.148685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.148717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.148894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.148926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.149111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.149145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.149398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.149431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.149652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.149685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.149944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.149976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.150177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.150210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.150482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.150514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.150775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.150807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.150928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.150961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.151139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.151172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.151276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.151309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.151495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.151527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.151773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.151805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.151978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.152010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.152129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.152162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.152356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.152388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.152564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.152596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.152781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.152814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.153039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.153083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.153199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.153231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.153473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.153506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.153703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.153736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.153914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.153946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.154084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.154118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.154425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.154458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.154577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.154609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.154735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.154768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.155011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.155043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.155226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.155259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.155444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.155476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.155602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.440 [2024-12-06 11:29:16.155635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.440 qpair failed and we were unable to recover it.
00:27:43.440 [2024-12-06 11:29:16.155819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.155857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.155976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.156008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.156264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.156298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.156568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.156601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.156790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.156822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.157045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.157288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.157530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.157743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.157879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.157996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.158029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.158377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.158447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.158677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.158713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.158919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.158951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.159090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.159123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.159378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.159410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.159625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.159657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.159780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.159811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.159992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.160025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.160250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.160283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.160482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.160514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.160624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.160655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.160826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.160858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.161038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.161081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.161269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.161301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.161495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.441 [2024-12-06 11:29:16.161527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.441 qpair failed and we were unable to recover it.
00:27:43.441 [2024-12-06 11:29:16.161656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.161688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.441 [2024-12-06 11:29:16.161871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.161909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.441 [2024-12-06 11:29:16.162045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.162089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.441 [2024-12-06 11:29:16.162264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.162296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.441 [2024-12-06 11:29:16.162478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.162510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 
00:27:43.441 [2024-12-06 11:29:16.162635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.162666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.441 [2024-12-06 11:29:16.162881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.441 [2024-12-06 11:29:16.162913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.441 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.163044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.163087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.163259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.163290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.163392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.163424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.163596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.163629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.163902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.163934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.164049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.164093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.164209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.164239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.164347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.164379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.164628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.164661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.164830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.164861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.165118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.165327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.165479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.165614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.165771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.165929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.165961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.166093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.166125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.166334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.166366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.166468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.166500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.166795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.166828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.166936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.166967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.167235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.167268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.167411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.167444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.167644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.167676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.167945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.167976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.168169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.168203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.168397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.168428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.168687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.168719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.168899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.168932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.169208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.169241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.169434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.169467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.169733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.169765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 00:27:43.442 [2024-12-06 11:29:16.169878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.442 [2024-12-06 11:29:16.169911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.442 qpair failed and we were unable to recover it. 
00:27:43.442 [2024-12-06 11:29:16.170119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.170153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.170456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.170488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.170621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.170656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.170790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.170821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.171006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.171039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.171337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.171371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.171610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.171643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.171828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.171859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.172106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.172139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.172332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.172364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.172556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.172587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.172857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.172889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.173140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.173174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.173360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.173391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.173671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.173703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.173972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.174003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.174152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.174186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.174313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.174343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.174463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.174496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.174792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.174823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.175075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.175109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.175345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.175377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.175560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.175591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.175805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.175837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.176036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.176079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.176204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.176236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.176357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.176388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.176559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.176590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.176714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.176746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.176988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.177029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 
00:27:43.443 [2024-12-06 11:29:16.177247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.177279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.177546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.177578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.177802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.443 [2024-12-06 11:29:16.177833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.443 qpair failed and we were unable to recover it. 00:27:43.443 [2024-12-06 11:29:16.178094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.178128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.178393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.178426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 
00:27:43.444 [2024-12-06 11:29:16.178628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.178660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.178939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.178971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.179259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.179292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.179482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.179512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.179763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.179795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 
00:27:43.444 [2024-12-06 11:29:16.179988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.180020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.180214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.180247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.180422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.180453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.180722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.180755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.180996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.181028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 
00:27:43.444 [2024-12-06 11:29:16.181280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.181314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.181438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.181469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.181698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.181728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.181906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.181938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 00:27:43.444 [2024-12-06 11:29:16.182054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.444 [2024-12-06 11:29:16.182097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.444 qpair failed and we were unable to recover it. 
00:27:43.447 [2024-12-06 11:29:16.210987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.447 [2024-12-06 11:29:16.211020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.447 qpair failed and we were unable to recover it. 00:27:43.447 [2024-12-06 11:29:16.211278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.447 [2024-12-06 11:29:16.211311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.447 qpair failed and we were unable to recover it. 00:27:43.447 [2024-12-06 11:29:16.211527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.447 [2024-12-06 11:29:16.211559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.447 qpair failed and we were unable to recover it. 00:27:43.447 [2024-12-06 11:29:16.211815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.447 [2024-12-06 11:29:16.211848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.447 qpair failed and we were unable to recover it. 00:27:43.447 [2024-12-06 11:29:16.212037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.447 [2024-12-06 11:29:16.212081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.447 qpair failed and we were unable to recover it. 
00:27:43.447 [2024-12-06 11:29:16.212356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.212393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.212521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.212553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.212848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.212881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.213176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.213211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.213346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.213379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.213587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.213619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.213870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.213902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.214163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.214197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.214414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.214446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.214638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.214670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.214901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.214933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.215205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.215240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.215449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.215481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.215691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.215724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.215998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.216031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.216240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.216271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.216473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.216505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.216624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.216657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.216953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.216984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.217198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.217233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.217354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.217387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.217636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.217850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.217881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.217987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.218019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.218161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.218193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.218445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.218477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.218582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.218614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.218826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.218858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.219156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.219189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.219401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.219433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.448 [2024-12-06 11:29:16.219648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.219680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.219955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.219988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.220169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.220202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.220469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.220501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 00:27:43.448 [2024-12-06 11:29:16.220754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.448 [2024-12-06 11:29:16.220786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.448 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.221015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.221046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.221354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.221386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.221579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.221611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.221888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.221921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.222110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.222144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.222320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.222354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.222544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.222581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.222771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.222804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.223048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.223090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.223370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.223482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.223514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.223761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.223794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.224081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.224115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.224390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.224423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.224601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.224633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.224823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.224854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.224971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.225002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.225282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.225315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.225507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.225539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.225794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.225827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.225953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.225985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.226092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.226127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.226374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.226682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.226714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.226919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.226951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 
00:27:43.449 [2024-12-06 11:29:16.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.227257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.227533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.227565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.227782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.227813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.228105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.449 [2024-12-06 11:29:16.228139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.449 qpair failed and we were unable to recover it. 00:27:43.449 [2024-12-06 11:29:16.228334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.228367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 
00:27:43.450 [2024-12-06 11:29:16.228587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.228619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.228748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.228779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.228902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.228934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.229148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.229193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.229306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.229337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 
00:27:43.450 [2024-12-06 11:29:16.229557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.229591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.229799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.229832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.230025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.230057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.230294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.230327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 00:27:43.450 [2024-12-06 11:29:16.230544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.450 [2024-12-06 11:29:16.230576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.450 qpair failed and we were unable to recover it. 
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock triplet for tqpair=0xc20590 (addr=10.0.0.2, port=4420) repeats verbatim through 2024-12-06 11:29:16.259480 (wall clock 00:27:43.450-00:27:43.453); duplicate records elided ...]
00:27:43.453 [2024-12-06 11:29:16.259731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.259763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.259876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.259908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.260173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.260207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.260387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.260418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.260669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.260701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 
00:27:43.453 [2024-12-06 11:29:16.260948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.260980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.261166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.261200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.261400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.261433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.261612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.261643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.261896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.261928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 
00:27:43.453 [2024-12-06 11:29:16.262199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.262232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.262437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.262468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.262704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.263006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.263038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 00:27:43.453 [2024-12-06 11:29:16.263251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.453 [2024-12-06 11:29:16.263284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.453 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.263559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.263591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.263858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.263891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.264078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.264111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.264302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.264334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.264552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.264585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.264716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.264749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.264929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.264961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.265166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.265200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.265384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.265416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.265664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.265696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.265914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.265946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.266146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.266181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.266400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.266431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.266613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.266646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.266914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.266945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.267136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.267170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.267346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.267379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.267640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.267672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.267961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.267993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.268251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.268283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.268564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.268597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.268888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.268919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.269206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.269239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.269494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.269526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.269803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.269834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 
00:27:43.454 [2024-12-06 11:29:16.270086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.270118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.270316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.270348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.270547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.270578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.454 qpair failed and we were unable to recover it. 00:27:43.454 [2024-12-06 11:29:16.270712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.454 [2024-12-06 11:29:16.270744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.270940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.270973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.271262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.271296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.271510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.271542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.271774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.271807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.272090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.272123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.272408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.272440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.272580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.272613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.272736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.272768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.273043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.273089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.273271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.273302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.273501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.273533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.273746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.273777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.274053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.274096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.274275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.274308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.274586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.274619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.274928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.274960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.275227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.275266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.275447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.275479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.275700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.275732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.275987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.276019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.276261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.276295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.276421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.276454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.276743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.276776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.276967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.276999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.277209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.277242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.277519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.277551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.277763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.277796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.278012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.278044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.278246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.278277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.278553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.278585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.278882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.278915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 
00:27:43.455 [2024-12-06 11:29:16.279041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.455 [2024-12-06 11:29:16.279085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.455 qpair failed and we were unable to recover it. 00:27:43.455 [2024-12-06 11:29:16.279366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.456 [2024-12-06 11:29:16.279400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.456 qpair failed and we were unable to recover it. 00:27:43.456 [2024-12-06 11:29:16.279651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.456 [2024-12-06 11:29:16.279682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.456 qpair failed and we were unable to recover it. 00:27:43.456 [2024-12-06 11:29:16.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.456 [2024-12-06 11:29:16.279916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.456 qpair failed and we were unable to recover it. 00:27:43.456 [2024-12-06 11:29:16.280188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.456 [2024-12-06 11:29:16.280222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.456 qpair failed and we were unable to recover it. 
00:27:43.459 [2024-12-06 11:29:16.311910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.311942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.312202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.312235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.312543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.312574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.312849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.312882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.313101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.313134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 
00:27:43.459 [2024-12-06 11:29:16.313394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.313427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.313719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.313757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.313966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.313998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.314315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.314349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 00:27:43.459 [2024-12-06 11:29:16.314559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.459 [2024-12-06 11:29:16.314591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.459 qpair failed and we were unable to recover it. 
00:27:43.459 [2024-12-06 11:29:16.314875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.314908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.315220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.315254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.315382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.315413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.315695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.315727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.315949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.315982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.316168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.316202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.459 [2024-12-06 11:29:16.316467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.459 [2024-12-06 11:29:16.316498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.459 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.316759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.316791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.317081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.317115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.317324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.317357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.317581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.317613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.317892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.317925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.318141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.318175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.318455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.318488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.318801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.318833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.319039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.319081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.319378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.319410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.319707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.319739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.320020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.320052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.320264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.320296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.320446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.320477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.320775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.320807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.321112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.321145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.321384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.321416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.321560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.321593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.321868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.321900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.322185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.322219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.322449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.322482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.322725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.322757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.322971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.323003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.323324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.323357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.323613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.323644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.323931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.323963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.324162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.324196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.324379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.324410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.324668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.324700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.324829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.324862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.325147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.325186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.325369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.325402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.325599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.460 [2024-12-06 11:29:16.325631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.460 qpair failed and we were unable to recover it.
00:27:43.460 [2024-12-06 11:29:16.325915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.325945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.326149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.326182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.326380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.326413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.326726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.326926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.326958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.327216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.327250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.327474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.327505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.327713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.327744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.327967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.327999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.328164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.328197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.328481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.328513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.328732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.328765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.329019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.329051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.329270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.329303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.329515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.329547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.329861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.329892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.330182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.330216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.330498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.330530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.330825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.330857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.330984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.461 [2024-12-06 11:29:16.331017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.461 qpair failed and we were unable to recover it.
00:27:43.461 [2024-12-06 11:29:16.331320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.331355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.331603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.331634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.331929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.331962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.332240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.332273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.332544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.332582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 
00:27:43.461 [2024-12-06 11:29:16.332876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.332908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.333201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.333235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.333513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.333544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.333834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.333866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.334100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.334134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 
00:27:43.461 [2024-12-06 11:29:16.334333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.334364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.334567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.334599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.334868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.334903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.335193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.335224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 00:27:43.461 [2024-12-06 11:29:16.335508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.461 [2024-12-06 11:29:16.335540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:43.461 qpair failed and we were unable to recover it. 
00:27:43.461 [2024-12-06 11:29:16.335739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.335772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.335995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.336026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.336329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.336362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.336618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.336697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.336845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.336881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.337168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.337203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.337472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.337504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.337794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.337825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.338161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.338195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.338431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.338464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.338740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.338772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.339080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.339113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.339291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.339324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.339606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.339637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.339864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.462 [2024-12-06 11:29:16.339896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.462 qpair failed and we were unable to recover it.
00:27:43.462 [2024-12-06 11:29:16.340210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.740 [2024-12-06 11:29:16.340243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.740 qpair failed and we were unable to recover it.
00:27:43.740 [2024-12-06 11:29:16.340503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.740 [2024-12-06 11:29:16.340545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.740 qpair failed and we were unable to recover it.
00:27:43.740 [2024-12-06 11:29:16.340763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.740 [2024-12-06 11:29:16.340795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.740 qpair failed and we were unable to recover it.
00:27:43.740 [2024-12-06 11:29:16.340978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.740 [2024-12-06 11:29:16.341011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.740 qpair failed and we were unable to recover it.
00:27:43.740 [2024-12-06 11:29:16.341246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.341279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.341536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.341570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.341764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.341796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.341931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.341963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.342144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.342178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.342437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.342470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.342777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.342809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.343111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.343144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.343359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.343391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.343588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.343619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.343814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.343846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.344138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.344171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.344353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.344386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.344570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.344602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.344914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.344947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.345188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.345221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.345430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.345463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.345737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.345769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.345949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.345982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.346266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.346299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.346534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.346565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.346846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.346878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.347078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.347111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.347417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.347448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.347804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.347881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.348192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.348232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.348450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.348484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.741 [2024-12-06 11:29:16.348697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.741 [2024-12-06 11:29:16.348730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.741 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.348983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.349015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.349282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.349315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.349572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.349605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.349918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.349950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.350235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.350269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.350563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.350595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.350720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.350753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.351012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.351043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.351267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.351300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.351499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.351547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.351836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.351868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.352078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.352112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.352294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.352327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.352582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.352614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.352889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.352921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.353121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.353154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.353336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.353368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.353577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.353609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.353794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.353826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.354029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.354074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.354303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.354335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.354602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.354634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.354854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.354886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.355093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.355126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.355411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.355442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.355642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.355675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.742 [2024-12-06 11:29:16.355870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.742 [2024-12-06 11:29:16.355901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.742 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.356188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.356222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.356405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.356437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.356715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.356748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.357039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.357084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.357357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.357390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.357675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.357707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.358050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.358097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.358397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.358430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.358670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.358702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.359017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.359051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.359283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.359316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.359605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.359637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.359862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.359896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.360090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.360124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.360385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.360419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.360619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.360651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.360877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.360910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.361257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.361466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.361499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.361697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.361730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.361917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.743 [2024-12-06 11:29:16.361951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.743 qpair failed and we were unable to recover it.
00:27:43.743 [2024-12-06 11:29:16.362238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.362272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 00:27:43.743 [2024-12-06 11:29:16.362581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.362621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 00:27:43.743 [2024-12-06 11:29:16.362908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.362940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 00:27:43.743 [2024-12-06 11:29:16.363150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.363184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 00:27:43.743 [2024-12-06 11:29:16.363452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.363485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 
00:27:43.743 [2024-12-06 11:29:16.363801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.743 [2024-12-06 11:29:16.363834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.743 qpair failed and we were unable to recover it. 00:27:43.743 [2024-12-06 11:29:16.364114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.364148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.364271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.364304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.364587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.364621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.364831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.364863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.744 [2024-12-06 11:29:16.365081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.365116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.365374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.365406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.365589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.365622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.365833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.365866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.366020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.366052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.744 [2024-12-06 11:29:16.366277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.366310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.366636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.366670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.366901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.366936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.367154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.367190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.367419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.367453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.744 [2024-12-06 11:29:16.367713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.367746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.368030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.368076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.368263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.368296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.368426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.368457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.368693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.368725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.744 [2024-12-06 11:29:16.369020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.369053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.369368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.369401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.369613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.369647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.369858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.369893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.370083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.744 [2024-12-06 11:29:16.370377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.370410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.370646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.370942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.370975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.371266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.371300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 00:27:43.744 [2024-12-06 11:29:16.371552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.744 [2024-12-06 11:29:16.371586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.744 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.371895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.371927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.372202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.372237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.372380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.372413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.372618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.372650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.372870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.372903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.373091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.373125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.373423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.373458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.373682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.373714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.373918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.373951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.374149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.374184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.374392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.374424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.374578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.374611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.374807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.374841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.375101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.375135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.375453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.375486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.375693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.375726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.375916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.375951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.376246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.376279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.376487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.376520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.376649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.376682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.376973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.377007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.377260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.377297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.377521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.377555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.377772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.377805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.378020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 
00:27:43.745 [2024-12-06 11:29:16.378408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.378442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.378736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.379025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.745 [2024-12-06 11:29:16.379070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.745 qpair failed and we were unable to recover it. 00:27:43.745 [2024-12-06 11:29:16.379258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.379292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.379507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.379540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 
00:27:43.746 [2024-12-06 11:29:16.379677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.379711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.380017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.380049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.380261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.380295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.380442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.380481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.380787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.380822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 
00:27:43.746 [2024-12-06 11:29:16.381029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.381074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.381362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.381395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.381747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.381780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.381979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.382012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.382242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.382276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 
00:27:43.746 [2024-12-06 11:29:16.382461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.382494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.382775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.382809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.383123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.383157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.383348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.383381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.383491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.383524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 
00:27:43.746 [2024-12-06 11:29:16.383799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.383831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.384010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.384044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.384288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.384322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.384550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.384584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.384831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.384865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 
00:27:43.746 [2024-12-06 11:29:16.385049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.385109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.385309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.385341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.385587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.385620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.385917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.746 [2024-12-06 11:29:16.385951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.746 qpair failed and we were unable to recover it. 00:27:43.746 [2024-12-06 11:29:16.386073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.386107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 
00:27:43.747 [2024-12-06 11:29:16.386314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.386348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.386496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.386529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.386664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.386697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.386977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.387010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.387257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.387292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 
00:27:43.747 [2024-12-06 11:29:16.387541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.387575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.387822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.387855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.388221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.388256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.388514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.388548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.388854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.388886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 
00:27:43.747 [2024-12-06 11:29:16.389178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.389212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.389355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.389389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.389622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.389657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.389855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.389889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.390087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.390121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 
00:27:43.747 [2024-12-06 11:29:16.390250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.390285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.390431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.390464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.390787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.390821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.391089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.391129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.391442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.391477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 
00:27:43.747 [2024-12-06 11:29:16.391769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.391802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.392020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.392054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.392284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.392320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.392593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.392626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.747 qpair failed and we were unable to recover it. 00:27:43.747 [2024-12-06 11:29:16.392827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.747 [2024-12-06 11:29:16.392860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.393072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.393107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.393370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.393404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.393665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.393699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.393965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.393998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.394314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.394350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.394562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.394595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.394872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.394905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.395115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.395150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.395360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.395394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.395521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.395555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.395890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.395924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.396177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.396211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.396362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.396396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.396584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.396616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.396920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.396954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.397154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.397187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.397315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.397348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.397534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.397567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.397699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.397732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.398036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.398079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.398296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.398330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.398518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.398553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.398761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.398796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.398926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.398958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.399142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.399176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 
00:27:43.748 [2024-12-06 11:29:16.399375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.399407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.399613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-12-06 11:29:16.399646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.748 qpair failed and we were unable to recover it. 00:27:43.748 [2024-12-06 11:29:16.399933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.399966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.400097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.400130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.400389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.400422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.400625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.400659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.400937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.400970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.401259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.401296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.401510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.401549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.401839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.401871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.402188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.402224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.402541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.402575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.402869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.402904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.403125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.403161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.403376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.403409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.403643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.403675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.403824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.403858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.404085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.404119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.404335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.404368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.404572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.404605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.404893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.404926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.405177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.405212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.405442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.405477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.405794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.405828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.406092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.406126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.406261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.406294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.406487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.406520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.406756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.406792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.407051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.407115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.407403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 
00:27:43.749 [2024-12-06 11:29:16.407651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.749 [2024-12-06 11:29:16.407684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.749 qpair failed and we were unable to recover it. 00:27:43.749 [2024-12-06 11:29:16.407919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.407952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.408167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.408201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.408467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.408500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.408790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.408823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 
00:27:43.750 [2024-12-06 11:29:16.409042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.409089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.409274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.409308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.409435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.409467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.409726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.409759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 00:27:43.750 [2024-12-06 11:29:16.410042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.750 [2024-12-06 11:29:16.410086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.750 qpair failed and we were unable to recover it. 
00:27:43.750 [2024-12-06 11:29:16.410307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.750 [2024-12-06 11:29:16.410340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:43.750 qpair failed and we were unable to recover it.
[... the same three-line error repeats ~114 more times for tqpair=0x7f9d40000b90 between 11:29:16.410 and 11:29:16.440 ...]
00:27:43.754 [2024-12-06 11:29:16.440558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.440591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.440717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.440749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.441032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.441077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.441265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.441298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.441557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.441588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 
00:27:43.754 [2024-12-06 11:29:16.441779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.441812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.442025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.442071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.442280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.442313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.442570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.442603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.442826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.442859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 
00:27:43.754 [2024-12-06 11:29:16.443147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.443179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.443384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.443418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.443670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.443703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.443911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.443950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 00:27:43.754 [2024-12-06 11:29:16.444141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.754 [2024-12-06 11:29:16.444175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.754 qpair failed and we were unable to recover it. 
00:27:43.754 [2024-12-06 11:29:16.444374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.444407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.444696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.444729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.444917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.444951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.445084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.445117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.445348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.445387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 
00:27:43.755 [2024-12-06 11:29:16.445603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.445636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.445839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.445872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.446130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.446164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.446389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.446423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.446602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.446634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 
00:27:43.755 [2024-12-06 11:29:16.446960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.446993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.447241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.447274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.447595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.447627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.447890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.447923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.448147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.448181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 
00:27:43.755 [2024-12-06 11:29:16.448328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.448361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.448664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.448696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.448922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.448955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.449221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.449256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.449486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.449518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 
00:27:43.755 [2024-12-06 11:29:16.449715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.449748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.450077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.450111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.450319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.450352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.450590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.450622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.450919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.450951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 
00:27:43.755 [2024-12-06 11:29:16.451252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.451289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.451559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.451591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.451733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.451765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.755 [2024-12-06 11:29:16.451957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.755 [2024-12-06 11:29:16.451991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.755 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.452202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.452235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.452470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.452772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.452805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.453034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.453076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.453283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.453316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.453510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.453543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.453820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.453853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.454051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.454113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.454428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.454461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.454720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.454758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.455046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.455094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.455416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.455448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.455688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.455720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.455913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.455945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.456139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.456174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.456380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.456412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.456606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.456639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.456877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.457175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.457209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.457500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.457533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.457854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.457887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.458099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.458132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.458359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.458392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.458710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.458744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.459029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.459073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.459376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.459409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 
00:27:43.756 [2024-12-06 11:29:16.459614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.459646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.459865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.459898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.756 [2024-12-06 11:29:16.460235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.756 [2024-12-06 11:29:16.460269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.756 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.460566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.460599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.460864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.460896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 
00:27:43.757 [2024-12-06 11:29:16.461133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.461167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.461401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.461433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.461745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.461778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.462011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.462043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.462291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.462324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 
00:27:43.757 [2024-12-06 11:29:16.462589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.462666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 
[... the same sequence for tqpair=0x7f9d4c000b90 repeated ~14 more times between 11:29:16.462 and 11:29:16.466, all with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:27:43.757 [2024-12-06 11:29:16.467032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.467074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.467229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.467262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.467470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.467503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.467702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.467735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.467977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.468010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 
00:27:43.757 [2024-12-06 11:29:16.468166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.757 [2024-12-06 11:29:16.468200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.757 qpair failed and we were unable to recover it. 00:27:43.757 [2024-12-06 11:29:16.468398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.468431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.468660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.468692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.468926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.468959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.469182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.469216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 
00:27:43.758 [2024-12-06 11:29:16.469430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.469463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.469657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.469690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.469908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.469943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.470085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.470119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.470375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.470408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 
00:27:43.758 [2024-12-06 11:29:16.470665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.470697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.470897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.470930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.471138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.471171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.471458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.471491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.471699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.471732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 
00:27:43.758 [2024-12-06 11:29:16.471965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.471999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.472216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.472250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.472453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.472486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.472722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.472755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.473039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.473083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 
00:27:43.758 [2024-12-06 11:29:16.473359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.473392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.473679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.473712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.473922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.473954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.474220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.474253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.474388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.474422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 
00:27:43.758 [2024-12-06 11:29:16.474578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.474610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.474970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.475003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.475228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.475260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.758 qpair failed and we were unable to recover it. 00:27:43.758 [2024-12-06 11:29:16.475462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.758 [2024-12-06 11:29:16.475495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.475700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.475732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 
00:27:43.759 [2024-12-06 11:29:16.475998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.476030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.476229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.476263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.476537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.476569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.476772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.476811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.477078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.477112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 
00:27:43.759 [2024-12-06 11:29:16.477394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.477426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.477656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.477689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.477905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.477937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.478203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.478236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.478450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.478483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 
00:27:43.759 [2024-12-06 11:29:16.478664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.478696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.478982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.479014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.479305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.479339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.479623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.479656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.479784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.479816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 
00:27:43.759 [2024-12-06 11:29:16.480106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.480139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.480363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.480396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.480676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.480708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.480830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.480863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.481203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.481236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 
00:27:43.759 [2024-12-06 11:29:16.481437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.759 [2024-12-06 11:29:16.481470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.759 qpair failed and we were unable to recover it. 00:27:43.759 [2024-12-06 11:29:16.481670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.481702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.481901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.481933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.482148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.482182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.482316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.482349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 
00:27:43.760 [2024-12-06 11:29:16.482577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.482609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.482738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.482771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.482913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.482945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.483227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.483260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.483511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.483544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 
00:27:43.760 [2024-12-06 11:29:16.483692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.483725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.484009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.484042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.484274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.484307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.484529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.484839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.484871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 
00:27:43.760 [2024-12-06 11:29:16.485079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.485112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.485318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.485350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.485636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.485669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.485926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.485958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.486157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.486189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 
00:27:43.760 [2024-12-06 11:29:16.486448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.486481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.486745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.486779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.486977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.487009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.487238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.487277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.487533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.487566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 
00:27:43.760 [2024-12-06 11:29:16.487871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.487902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.488131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.488164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.488369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.488402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.488583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.488615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.760 qpair failed and we were unable to recover it. 00:27:43.760 [2024-12-06 11:29:16.488910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.760 [2024-12-06 11:29:16.488944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.489186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.489220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.489428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.489461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.489665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.489698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.489834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.489866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.490148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.490182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.490413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.490445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.490732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.490765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.490991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.491023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.491167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.491201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.491395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.491429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.491564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.491596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.491930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.491962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.492173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.492206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.492351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.492384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.492518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.492551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.492873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.492906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.493096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.493130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.493349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.493382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.493506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.493538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.493858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.493890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.494101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.494135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.494262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.494295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.494527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.494559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.494848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.494880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.495114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.495147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.495333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.495366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.495485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.495517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.495846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.495879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.496191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.496225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.496431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.496464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.496664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.496697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.496978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.497010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.497203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.497237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.497501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.497539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 00:27:43.761 [2024-12-06 11:29:16.497803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.497836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.761 qpair failed and we were unable to recover it. 
00:27:43.761 [2024-12-06 11:29:16.498165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.761 [2024-12-06 11:29:16.498198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.498459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.498492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.498802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.498834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.499100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.499134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.499269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.499301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.499500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.499532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.499812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.499846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.500045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.500089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.500324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.500356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.500652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.500683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.500964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.500997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.501293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.501326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.501603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.501636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.501853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.501885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.502145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.502178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.502465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.502498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.502622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.502654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.502871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.502904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.503098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.503131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.503424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.503457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.503663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.503696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.503880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.503912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.504108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.504141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.504446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.504479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.504675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.504706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.504910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.504943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.505231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.505264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.505551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.505585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.505896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.505928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.506212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.506245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.506559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.506592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.506865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.506897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.507193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.507226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.507553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.507586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.507828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.507860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 
00:27:43.762 [2024-12-06 11:29:16.508066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.508099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.508357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.762 [2024-12-06 11:29:16.508389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.762 qpair failed and we were unable to recover it. 00:27:43.762 [2024-12-06 11:29:16.508693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.508726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.508852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.508883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.509169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.509202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 
00:27:43.763 [2024-12-06 11:29:16.509398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.509431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.509577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.509608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.509872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.509905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.510087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.510122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.510247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.510282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 
00:27:43.763 [2024-12-06 11:29:16.510490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.510523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.510936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.510969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.511285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.511319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.511511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.511544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 00:27:43.763 [2024-12-06 11:29:16.511701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.511735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it. 
00:27:43.763 [2024-12-06 11:29:16.512031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.763 [2024-12-06 11:29:16.512092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.763 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and qpair recovery errors for tqpair=0x7f9d4c000b90 against 10.0.0.2:4420 repeat continuously through 11:29:16.541535 — duplicate log entries omitted]
00:27:43.766 [2024-12-06 11:29:16.541734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.541767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.541974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.542006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.542206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.542240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.542462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.542494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.542795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.542828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 
00:27:43.766 [2024-12-06 11:29:16.543085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.543119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.543267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.543299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.543587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.543621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.543914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.543947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.544226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.544261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 
00:27:43.766 [2024-12-06 11:29:16.544479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.544512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.544826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.544859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.545139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.545174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.766 qpair failed and we were unable to recover it. 00:27:43.766 [2024-12-06 11:29:16.545428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.766 [2024-12-06 11:29:16.545462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.545746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.545779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.546071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.546105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.546303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.546336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.546470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.546503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.546792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.546825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.547115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.547149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.547429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.547468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.547634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.547667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.547951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.547983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.548251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.548285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.548591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.548623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.548911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.548944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.549189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.549223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.549427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.549460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.549617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.549651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.549856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.549889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.550192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.550226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.550430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.550464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.550654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.550687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.550920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.550953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.551177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.551212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.551413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.551446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.551637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.551670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.551871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.551904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.552084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.552320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.552353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.552613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.552645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.552850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.552882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.553142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.553176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.553408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.553441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.553698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.553731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.553935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.553967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.554166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.554199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.554490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.554523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.554856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.554888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 00:27:43.767 [2024-12-06 11:29:16.555238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.767 [2024-12-06 11:29:16.555274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.767 qpair failed and we were unable to recover it. 
00:27:43.767 [2024-12-06 11:29:16.555472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.555504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.555701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.555734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.556007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.556039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.556254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.556288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.556496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.556529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 
00:27:43.768 [2024-12-06 11:29:16.556851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.556883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.557158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.557193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.557452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.557483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.557692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.557725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.558051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.558104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 
00:27:43.768 [2024-12-06 11:29:16.558388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.558426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.558705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.558737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.559028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.559072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.559346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.559381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.559655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.559688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 
00:27:43.768 [2024-12-06 11:29:16.559901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.559933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.560211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.560245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.560392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.560426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.560624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.560657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.560910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.560941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 
00:27:43.768 [2024-12-06 11:29:16.561188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.561221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.561482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.561516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.561789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.561821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.561955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.561988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 00:27:43.768 [2024-12-06 11:29:16.562231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.768 [2024-12-06 11:29:16.562265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.768 qpair failed and we were unable to recover it. 
00:27:43.768 [2024-12-06 11:29:16.562472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.562503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.562810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.562842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.563112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.563144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.563294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.563328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.563533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.563564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.563792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.563825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.564021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.564053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.564223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.564255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.564542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.564572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.564826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.564859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.565166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.565202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.768 [2024-12-06 11:29:16.565487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.768 [2024-12-06 11:29:16.565522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.768 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.565826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.565861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.566164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.566200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.566406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.566441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.566642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.566677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.566858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.566894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.567203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.567238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.567466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.567501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.567723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.567757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.568040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.568094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.568306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.568340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.568636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.568770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.568806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.569086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.569122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.569384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.569430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.569573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.569608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.569893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.569928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.570122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.570157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.570363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.570398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.570531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.570565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.570848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.570882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.571196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.571231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.571483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.571518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.571828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.571862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.572156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.572192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.572424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.572459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.572800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.572834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.573035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.573081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.573358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.573392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.573595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.573630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.573932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.573967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.574176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.574212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.574449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.769 [2024-12-06 11:29:16.574484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.769 qpair failed and we were unable to recover it.
00:27:43.769 [2024-12-06 11:29:16.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.574794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.575087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.575122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.575253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.575288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.575558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.575592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.575926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.575960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.576198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.576233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.576464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.576499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.576654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.576689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.576898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.576934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.577097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.577132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.577273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.577309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.577586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.577620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.577883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.577918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.578197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.578232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.578524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.578560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.578894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.578928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.579207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.579242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.579436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.579471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.579678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.579713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.579968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.580003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.580233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.580269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.580470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.580510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.580704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.580739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.580935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.580970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.581231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.581266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.581403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.581438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.581719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.581753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.582008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.582044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.582360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.582394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.582657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.582691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.583021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.583056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.583218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.583253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.583517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.583551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.583880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.583915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.584176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.584211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.584450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.584485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.770 [2024-12-06 11:29:16.584747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.770 [2024-12-06 11:29:16.584781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.770 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.584972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.585007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.585219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.585254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.585390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.585425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.585576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.585610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.585754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.585789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.586056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.586115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.586373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.586408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.586556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.586591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.586715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.586750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.586988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.587022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.587338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.587372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.587505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.587539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.587681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.587717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.587979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.588014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.588292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.588326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.588640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.588675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.588933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.588968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.589155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.589190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.589405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.589439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.589636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.589671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.589855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.589889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.590019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.590052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.590200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.590234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.590513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.590547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.590811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.590852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.590981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.591016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.591218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.591253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.591568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.591602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.591911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.591945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.592168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.592204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.592412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.592447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.592655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.592953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.592988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.593144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.771 [2024-12-06 11:29:16.593180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:43.771 qpair failed and we were unable to recover it.
00:27:43.771 [2024-12-06 11:29:16.593462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.771 [2024-12-06 11:29:16.593497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.771 qpair failed and we were unable to recover it. 00:27:43.771 [2024-12-06 11:29:16.593704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.771 [2024-12-06 11:29:16.593738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.771 qpair failed and we were unable to recover it. 00:27:43.771 [2024-12-06 11:29:16.593996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.594031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.594359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.594395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.594660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.594695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.594994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.595028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.595332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.595369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.595573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.595606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.595832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.595867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.596073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.596109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.596294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.596329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.596665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.596699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.596937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.596973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.597221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.597256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.597465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.597499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.597685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.597719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.597934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.597968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.598160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.598195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.598460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.598495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.598693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.598727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.598910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.598945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.599218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.599252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.599379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.599414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.599544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.599578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.599909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.599944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.600216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.600251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.600539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.600573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.600825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.600860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.601078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.601114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.601317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.601352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.601502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.601541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.601728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.601762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.602078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.602114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.602371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.602404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.602715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.602750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 
00:27:43.772 [2024-12-06 11:29:16.602877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.602911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.603125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.603159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.603440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.603476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.772 [2024-12-06 11:29:16.603685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.772 [2024-12-06 11:29:16.603719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.772 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.603987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.604022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.604228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.604264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.604422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.604455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.604654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.604687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.604816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.604851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.605045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.605088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.605216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.605250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.605457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.605492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.605630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.605664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.605809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.605844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.606095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.606130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.606254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.606287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.606498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.606532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.606659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.606693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.607023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.607057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.607203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.607237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.607464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.607498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.607622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.607656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.607781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.607813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.608018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.608049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.608367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.608399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.608552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.608584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.608807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.608841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.609142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.609178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.609391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.609426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.609655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.609690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.609891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.609926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.610122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.610159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.610385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.610428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.610576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.610609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.610752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.610788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 
00:27:43.773 [2024-12-06 11:29:16.611080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.611122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.611268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.773 [2024-12-06 11:29:16.611304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.773 qpair failed and we were unable to recover it. 00:27:43.773 [2024-12-06 11:29:16.611418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.774 [2024-12-06 11:29:16.611453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.774 qpair failed and we were unable to recover it. 00:27:43.774 [2024-12-06 11:29:16.611562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.774 [2024-12-06 11:29:16.611596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.774 qpair failed and we were unable to recover it. 00:27:43.774 [2024-12-06 11:29:16.611800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.774 [2024-12-06 11:29:16.611835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.774 qpair failed and we were unable to recover it. 
00:27:43.774 [2024-12-06 11:29:16.612046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.774 [2024-12-06 11:29:16.612090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.774 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with timestamps from 11:29:16.612 through 11:29:16.643 ...]
00:27:43.777 [2024-12-06 11:29:16.643587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.643627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.643869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.643905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.644013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.644047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.644191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.644228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.644362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.644396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 
00:27:43.777 [2024-12-06 11:29:16.644523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.644558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.644703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.644738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.644999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.645033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.645294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.645330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.645590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.645624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 
00:27:43.777 [2024-12-06 11:29:16.645911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.645946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.646301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.646337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.646526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.646560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.646849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.646885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.647229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.647266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 
00:27:43.777 [2024-12-06 11:29:16.647412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.647445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.647660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.647693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.647931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.647965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.648123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.648158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.648303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.648337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 
00:27:43.777 [2024-12-06 11:29:16.648565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.648600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.648797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.648831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.648971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.649006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.649262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.649298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.649416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.649452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 
00:27:43.777 [2024-12-06 11:29:16.649756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.649791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.650051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.650098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.650280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.777 [2024-12-06 11:29:16.650320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.777 qpair failed and we were unable to recover it. 00:27:43.777 [2024-12-06 11:29:16.650516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.650550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.650786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.650820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.651010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.651044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.651204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.651239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.651378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.651412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.651624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.651659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.651788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.651824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.652143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.652181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.652326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.652361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.652487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.652523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.652812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.652846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.653047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.653093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.653315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.653350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.653542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.653576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.653695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.653729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.654015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.654049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.654325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.654359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.654512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.654547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.654765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.654802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.654946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.654980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.655186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.655221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.655365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.655399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.655686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.655721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.655925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.655959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.656252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.656287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.656445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.656479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.656628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.656662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 
00:27:43.778 [2024-12-06 11:29:16.657027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.657071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.657311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.657345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:43.778 [2024-12-06 11:29:16.657583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.778 [2024-12-06 11:29:16.657618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:43.778 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.657951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.657986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.658184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.658219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 
00:27:44.056 [2024-12-06 11:29:16.658358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.658395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.658608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.658643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.658883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.658919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.659070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.659107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.056 qpair failed and we were unable to recover it. 00:27:44.056 [2024-12-06 11:29:16.659293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.056 [2024-12-06 11:29:16.659327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 
00:27:44.057 [2024-12-06 11:29:16.659475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.659510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.659762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.659799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.659983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.660023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.660345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.660381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.660572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.660610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 
00:27:44.057 [2024-12-06 11:29:16.660737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.660771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.661111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.661148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.661338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.661374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.661531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.661567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 00:27:44.057 [2024-12-06 11:29:16.661904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.661939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 
00:27:44.057 [2024-12-06 11:29:16.662132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.057 [2024-12-06 11:29:16.662168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.057 qpair failed and we were unable to recover it. 
[Identical connect() failed (errno = 111) / qpair connection errors for tqpair=0x7f9d4c000b90 (addr=10.0.0.2, port=4420) repeat continuously from 11:29:16.662378 through 11:29:16.692434; verbatim repeats elided.]
00:27:44.060 [2024-12-06 11:29:16.692563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.060 [2024-12-06 11:29:16.692597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.060 qpair failed and we were unable to recover it. 00:27:44.060 [2024-12-06 11:29:16.692818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.060 [2024-12-06 11:29:16.692852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.060 qpair failed and we were unable to recover it. 00:27:44.060 [2024-12-06 11:29:16.693137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.060 [2024-12-06 11:29:16.693175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.060 qpair failed and we were unable to recover it. 00:27:44.060 [2024-12-06 11:29:16.693416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.060 [2024-12-06 11:29:16.693452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.060 qpair failed and we were unable to recover it. 00:27:44.060 [2024-12-06 11:29:16.693592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.060 [2024-12-06 11:29:16.693628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.693813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.693849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.693959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.693994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.694124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.694160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.694368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.694404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.694731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.694765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.694893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.694929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.695151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.695187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.695407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.695443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.695585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.695620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.695941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.696085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.696121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.696278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.696314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.696597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.696632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.696749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.696784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.697000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.697036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.697274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.697308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.697468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.697502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.697687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.697721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.697833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.697868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.698178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.698217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.698344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.698377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.698604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.698638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.698848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.698884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.699082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.699133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.699260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.699296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.699430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.699466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.699688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.699723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.699993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.700029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.700333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.700369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.700575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.700610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 
00:27:44.061 [2024-12-06 11:29:16.700814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.700849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.061 [2024-12-06 11:29:16.701079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.061 [2024-12-06 11:29:16.701116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.061 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.701311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.701353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.701491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.701526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.701807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.701842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.702031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.702078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.702334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.702368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.702551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.702586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.702886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.702920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.703148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.703185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.703400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.703435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.703591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.703627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.703844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.703878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.704152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.704188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.704424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.704458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.704647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.704682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.704830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.704864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.704999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.705034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.705178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.705214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.705425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.705461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.705669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.705703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.705900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.705935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.706133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.706169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.706386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.706422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.706637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.706672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.706786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.706821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.706944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.706978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.707130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.707165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.707349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.707385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.707553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.707588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 
00:27:44.062 [2024-12-06 11:29:16.707890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.707926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.062 qpair failed and we were unable to recover it. 00:27:44.062 [2024-12-06 11:29:16.708171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.062 [2024-12-06 11:29:16.708208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.708512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.708547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.708707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.708743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.709016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.709052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 
00:27:44.063 [2024-12-06 11:29:16.709321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.709356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.709556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.709591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.709880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.709916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.710195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.710232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 00:27:44.063 [2024-12-06 11:29:16.710462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.063 [2024-12-06 11:29:16.710498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.063 qpair failed and we were unable to recover it. 
00:27:44.063 [2024-12-06 11:29:16.710660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.063 [2024-12-06 11:29:16.710698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.063 qpair failed and we were unable to recover it.
[the three-line error group above repeats ~115 more times between 11:29:16.710 and 11:29:16.742; every attempt targets tqpair=0x7f9d4c000b90 at 10.0.0.2:4420 and fails with errno = 111 (ECONNREFUSED), i.e. nothing is listening on the target port]
00:27:44.066 [2024-12-06 11:29:16.743036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.743102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.743302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.743336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.743597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.743632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.743758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.743792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.743977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.744012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 
00:27:44.066 [2024-12-06 11:29:16.744226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.744262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.744401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.744434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.066 [2024-12-06 11:29:16.744643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.066 [2024-12-06 11:29:16.744683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.066 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.744826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.744860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.745081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.745118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.745262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.745296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.745489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.745524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.745722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.745757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.745942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.745977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.746199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.746235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.746473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.746507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.746722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.746757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.746888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.746922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.747176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.747364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.747399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.747666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.747701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.747848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.747884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.748029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.748073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.748207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.748242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.748363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.748397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.748586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.748621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.748884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.749016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.749050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.749180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.749214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.749419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.749454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.749570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.749604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.749890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.749924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.751110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.751169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.751412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.751448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.751646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.751683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 
00:27:44.067 [2024-12-06 11:29:16.751876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.751911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.067 [2024-12-06 11:29:16.752197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.067 [2024-12-06 11:29:16.752235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.067 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.752501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.752536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.752717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.752752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.752878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.752913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.753251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.753286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.753573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.753607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.753750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.753785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.753975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.754010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.754166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.754201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.754340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.754375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.754521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.754556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.754754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.754796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.755005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.755041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.755287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.755321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.755539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.755574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.755715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.755749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.756005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.756177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.756337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.756495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.756667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.756820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.756855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.757085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.757121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.757415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.757450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.757647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.757681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.757837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.757871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.758081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.758117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.758333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.758367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.758502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.758535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 
00:27:44.068 [2024-12-06 11:29:16.758727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.758762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.758890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.758924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.759121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.759155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.068 [2024-12-06 11:29:16.760657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.068 [2024-12-06 11:29:16.760723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.068 qpair failed and we were unable to recover it. 00:27:44.069 [2024-12-06 11:29:16.760863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.760900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 
00:27:44.069 [2024-12-06 11:29:16.761187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.761223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 00:27:44.069 [2024-12-06 11:29:16.761413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.761448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 00:27:44.069 [2024-12-06 11:29:16.761665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.761699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 00:27:44.069 [2024-12-06 11:29:16.761812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.761846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 00:27:44.069 [2024-12-06 11:29:16.762087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.762122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 
00:27:44.069 [2024-12-06 11:29:16.762355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.069 [2024-12-06 11:29:16.762389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.069 qpair failed and we were unable to recover it. 
[identical error pair — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 2024-12-06 11:29:16.762 through 11:29:16.786; repeats elided]
00:27:44.072 [2024-12-06 11:29:16.787220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.787254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 00:27:44.072 [2024-12-06 11:29:16.787477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.787510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 00:27:44.072 [2024-12-06 11:29:16.787635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.787668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 00:27:44.072 [2024-12-06 11:29:16.787919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.787952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 00:27:44.072 [2024-12-06 11:29:16.788233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.788274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 
00:27:44.072 [2024-12-06 11:29:16.788504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.072 [2024-12-06 11:29:16.788538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.072 qpair failed and we were unable to recover it. 00:27:44.072 [2024-12-06 11:29:16.788687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.788722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.789002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.789035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.789215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.789250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.789451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.789485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.789677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.789709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.789961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.789994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.790206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.790241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.790445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.790480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.790751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.790783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.791048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.791091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.791243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.791277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.791421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.791455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.791642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.791675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.791871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.791904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.792016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.792050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.792300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.792333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.792475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.792505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.792641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.792672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.792785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.792818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.793111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.793147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.793299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.793333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.793456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.793490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.793687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.793730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.793877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.793910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.794123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.794158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.794312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.794346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.794544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.794577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.794904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.794938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.795148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.795183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 
00:27:44.073 [2024-12-06 11:29:16.795385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.795419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.795616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.795650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.795936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.795969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.796196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.073 [2024-12-06 11:29:16.796230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.073 qpair failed and we were unable to recover it. 00:27:44.073 [2024-12-06 11:29:16.796481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.796514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.796787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.796822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.797029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.797072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.797282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.797315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.797444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.797478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.797611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.797650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.797838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.797872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.798075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.798110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.798312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.798346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.798471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.798506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.798802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.798836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.799125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.799160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.799325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.799359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.799564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.799598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.799818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.799852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.800078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.800113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.800236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.800270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.800569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.800602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.800735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.800769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.800960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.800995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.801225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.801260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.801465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.801499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.801718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.801751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.802077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.802373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.802407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.802659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.802693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.803016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.803049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.803184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.803217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.803473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.803507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.803718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.803752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.803934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.803968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 
00:27:44.074 [2024-12-06 11:29:16.804157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.804192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.804344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.804378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.804580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.074 [2024-12-06 11:29:16.804614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.074 qpair failed and we were unable to recover it. 00:27:44.074 [2024-12-06 11:29:16.804954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.075 [2024-12-06 11:29:16.804988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.075 qpair failed and we were unable to recover it. 00:27:44.075 [2024-12-06 11:29:16.805199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.075 [2024-12-06 11:29:16.805235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.075 qpair failed and we were unable to recover it. 
00:27:44.075 [2024-12-06 11:29:16.805514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.075 [2024-12-06 11:29:16.805547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.075 qpair failed and we were unable to recover it.
[log trimmed: the same error pair — connect() failed with errno = 111 (ECONNREFUSED) followed by a sock connection error for tqpair=0x7f9d4c000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously with advancing timestamps from 11:29:16.805 through 11:29:16.834]
00:27:44.078 [2024-12-06 11:29:16.835094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.835130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.835347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.835381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.835500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.835535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.835657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.835691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.835968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.836003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 
00:27:44.078 [2024-12-06 11:29:16.836239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.836275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.836461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.836496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.836628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.836662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.836935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.836969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.837156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.837192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 
00:27:44.078 [2024-12-06 11:29:16.837481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.837515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.837729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.837764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.837888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.078 [2024-12-06 11:29:16.837923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.078 qpair failed and we were unable to recover it. 00:27:44.078 [2024-12-06 11:29:16.838208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.838244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.838470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.838511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.838662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.838697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.838965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.839000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.839323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.839359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.839548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.839583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.839795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.839830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.840020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.840054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.840327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.840363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.840598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.840633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.840870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.840905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.841095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.841130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.841268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.841303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.841454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.841489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.841712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.841747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.841942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.841976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.842186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.842223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.842422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.842457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.842664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.842699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.842816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.842850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.843174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.843209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.843403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.843436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.843590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.843624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.843835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.843869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.844128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.844163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.844307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.844341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.844484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.844517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.844704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.844738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.844948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.844983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.845312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.845347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.845552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.845586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.845807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.845841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 
00:27:44.079 [2024-12-06 11:29:16.845955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.845987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.846143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.079 [2024-12-06 11:29:16.846180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.079 qpair failed and we were unable to recover it. 00:27:44.079 [2024-12-06 11:29:16.846387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.846421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.846630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.846664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.846970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.847004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.847334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.847368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.847571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.847605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.847916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.847951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.848223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.848259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.848548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.848591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.848924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.848959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.849224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.849259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.849482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.849517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.849797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.849832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.850036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.850082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.850210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.850244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.850530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.850564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.850712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.850746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.851004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.851038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.851201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.851235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.851492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.851526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.851766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.851800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.852067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.852102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.852378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.852413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.852720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.852754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.852977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.853010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.853246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.853281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.853417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.853450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.853712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.853745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.854002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.854036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
00:27:44.080 [2024-12-06 11:29:16.854286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.854323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.854637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.854672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.854810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.854845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.855035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.855091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 00:27:44.080 [2024-12-06 11:29:16.855223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.080 [2024-12-06 11:29:16.855256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.080 qpair failed and we were unable to recover it. 
[elided: the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. error triple repeats for each subsequent connection retry through 11:29:16.883500]
00:27:44.084 [2024-12-06 11:29:16.883656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.883691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.883985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.884020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.884229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.884265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.884473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.884507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.884644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.884678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 
00:27:44.084 [2024-12-06 11:29:16.884976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.885011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.885242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.885280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.885542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.885967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 00:27:44.084 [2024-12-06 11:29:16.886211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.084 [2024-12-06 11:29:16.886246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.084 qpair failed and we were unable to recover it. 
00:27:44.084 [2024-12-06 11:29:16.886673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.084 [2024-12-06 11:29:16.886753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:44.084 qpair failed and we were unable to recover it.
00:27:44.087 [2024-12-06 11:29:16.910919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.910955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.911143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.911180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.911467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.911502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.911732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.911768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.912086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.912125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 
00:27:44.087 [2024-12-06 11:29:16.912284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.912321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.912581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.912616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.912871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.912905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.913192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.913228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.913433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.913467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 
00:27:44.087 [2024-12-06 11:29:16.913745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.913782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.913986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.914022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.914197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.914234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.914523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.914559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.914776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.914812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 
00:27:44.087 [2024-12-06 11:29:16.914947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.914982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.915220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.915258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.915478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.915513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.915720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.915756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.915956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.915991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 
00:27:44.087 [2024-12-06 11:29:16.916247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.916283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.916525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.916562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.916790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.916825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.917170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.917207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 00:27:44.087 [2024-12-06 11:29:16.917492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.087 [2024-12-06 11:29:16.917527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.087 qpair failed and we were unable to recover it. 
00:27:44.087 [2024-12-06 11:29:16.917724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.917761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.918041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.918083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.918236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.918271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.918415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.918452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.918618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.918654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.918938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.918974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.919168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.919204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.919394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.919428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.919614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.919650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.919877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.919912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.920127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.920164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.920427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.920468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.920599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.920634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.920957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.920992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.921226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.921264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.921468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.921504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.921741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.921775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.922038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.922086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.922384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.922418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.922613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.922649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.922787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.922823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.923126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.923164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.923424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.923458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.923701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.923736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.924000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.924034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.924361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.924397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.924612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.924646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.924942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.924977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.925184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.925219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.925430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.925465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 
00:27:44.088 [2024-12-06 11:29:16.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.925759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.925964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.088 [2024-12-06 11:29:16.926000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.088 qpair failed and we were unable to recover it. 00:27:44.088 [2024-12-06 11:29:16.926279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.926315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.926464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.926498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.926698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.926734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 
00:27:44.089 [2024-12-06 11:29:16.926938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.926973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.927189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.927227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.927360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.927397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.927630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.927666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.927876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.927913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 
00:27:44.089 [2024-12-06 11:29:16.928050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.928097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.928251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.928287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.928484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.928519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.928732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.928768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.928964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.928999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 
00:27:44.089 [2024-12-06 11:29:16.929219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.929256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.929391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.929427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.929661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.929696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.929894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.929929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.930145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.930182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 
00:27:44.089 [2024-12-06 11:29:16.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.930358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.930618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.930659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.930978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.931013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.931181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.931218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 00:27:44.089 [2024-12-06 11:29:16.931430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.089 [2024-12-06 11:29:16.931466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:44.089 qpair failed and we were unable to recover it. 
00:27:44.091 [2024-12-06 11:29:16.944106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e540 (9): Bad file descriptor
00:27:44.091 [2024-12-06 11:29:16.944496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.091 [2024-12-06 11:29:16.944550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.091 qpair failed and we were unable to recover it.
00:27:44.092 [2024-12-06 11:29:16.960234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.092 [2024-12-06 11:29:16.960270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.092 qpair failed and we were unable to recover it. 00:27:44.092 [2024-12-06 11:29:16.960605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.092 [2024-12-06 11:29:16.960638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.092 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.960922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.960955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.961187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.961223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.961443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.961478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.961790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.961824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.962099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.962134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.962430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.962466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.962774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.962807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.963088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.963123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.963259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.963294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.963499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.963532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.963794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.963829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.964034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.964094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.964305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.964340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.964624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.964659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.964949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.964983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.965261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.965298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.965485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.965520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.965705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.965740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.966026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.966068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.966340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.966375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.966608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.966642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.966901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.966935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.967075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.967111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.967371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.967405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.967535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.967570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.967910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.967944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.968182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.968218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.968462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.968497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 
00:27:44.093 [2024-12-06 11:29:16.968810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.968844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.093 qpair failed and we were unable to recover it. 00:27:44.093 [2024-12-06 11:29:16.969050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.093 [2024-12-06 11:29:16.969096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.969365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.969399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.969688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.969723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.970016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.970055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 
00:27:44.094 [2024-12-06 11:29:16.970349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.970384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.970655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.970688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.970962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.970996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.971295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.971330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.971526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.971561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 
00:27:44.094 [2024-12-06 11:29:16.971843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.971878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.972163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.972486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.972522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.972787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.972822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.973126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.973162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 
00:27:44.094 [2024-12-06 11:29:16.973428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.973462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.973663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.973697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.973821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.973855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.974073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.094 [2024-12-06 11:29:16.974108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.094 qpair failed and we were unable to recover it. 00:27:44.094 [2024-12-06 11:29:16.974426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.371 [2024-12-06 11:29:16.974461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.371 qpair failed and we were unable to recover it. 
00:27:44.371 [2024-12-06 11:29:16.974670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.371 [2024-12-06 11:29:16.974704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.371 qpair failed and we were unable to recover it. 00:27:44.371 [2024-12-06 11:29:16.975000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.371 [2024-12-06 11:29:16.975034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.371 qpair failed and we were unable to recover it. 00:27:44.371 [2024-12-06 11:29:16.975343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.371 [2024-12-06 11:29:16.975378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.371 qpair failed and we were unable to recover it. 00:27:44.371 [2024-12-06 11:29:16.975657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.371 [2024-12-06 11:29:16.975690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.371 qpair failed and we were unable to recover it. 00:27:44.371 [2024-12-06 11:29:16.975978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.976013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.372 [2024-12-06 11:29:16.976243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.976279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.976562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.976598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.976906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.976941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.977233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.977269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.977549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.977583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.372 [2024-12-06 11:29:16.977788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.977822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.978037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.978084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.978354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.978389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.978580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.978613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.978904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.978938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.372 [2024-12-06 11:29:16.979180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.979214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.979483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.979517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.979802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.979837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.980021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.980054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.980336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.980370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.372 [2024-12-06 11:29:16.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.980599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.980861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.980895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.981120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.981156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.981402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.981437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.981641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.981675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.372 [2024-12-06 11:29:16.981933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.981973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.982162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.982198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.982482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.982516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.982779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.982814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 00:27:44.372 [2024-12-06 11:29:16.983105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.372 [2024-12-06 11:29:16.983141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.372 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.014589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.014624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.014807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.014844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.015128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.015163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.015447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.015482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.015769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.015803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.016098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.016134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.016408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.016442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.016748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.016782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.017029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.017075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.017369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.017403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.017609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.017644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.017929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.017963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.018292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.018329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.018538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.018573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.018892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.018926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.019151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.019187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.019525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.019559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.019756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.019789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.020112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.020148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.020414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.020449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.020637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.020672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.020955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.020996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.021321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.021638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.021672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.021874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.021909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.022093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.022129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.022419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.022453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.022741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.022777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.023055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.023100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.376 [2024-12-06 11:29:17.023300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.023335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 
00:27:44.376 [2024-12-06 11:29:17.023594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.376 [2024-12-06 11:29:17.023629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.376 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.023872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.023907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.024165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.024201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.024501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.024537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.024719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.024754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 
00:27:44.377 [2024-12-06 11:29:17.024964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.024998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.025284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.025320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.025602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.025636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.025898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.025931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.026219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.026255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 
00:27:44.377 [2024-12-06 11:29:17.026539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.026575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.026857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.026890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.027184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.027220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.027496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.027531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.027852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.027887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 
00:27:44.377 [2024-12-06 11:29:17.028174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.028210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.028449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.028484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.028680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.028714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.029001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.029041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.029309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.029344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 
00:27:44.377 [2024-12-06 11:29:17.029544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.029578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.029864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.029898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.030178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.030212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.030504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.030539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.030724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.030759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 
00:27:44.377 [2024-12-06 11:29:17.030970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.031005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.031277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.031313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.031607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.377 [2024-12-06 11:29:17.031641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.377 qpair failed and we were unable to recover it. 00:27:44.377 [2024-12-06 11:29:17.031856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.031890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.032104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.032139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.032399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.032434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.032675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.032710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.033028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.033073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.033365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.033400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.033715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.033749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.033939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.033972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.034270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.034305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.034570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.034605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.034872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.034907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.035190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.035226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.035531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.035566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.035851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.035885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.036197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.036233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.036519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.036555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.036839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.036874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.037179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.037214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.037509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.037545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.037749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.037784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.038047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.038095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.038363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.038397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.038549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.038583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.038869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.038904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.039027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.039074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.039266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.039300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.039585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.039620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.039847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.039882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.040169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.040203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.040501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.040536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.040665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.040699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.378 [2024-12-06 11:29:17.041012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.041052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 
00:27:44.378 [2024-12-06 11:29:17.041321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.378 [2024-12-06 11:29:17.041356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.378 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.041645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.041680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.041870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.041905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.042165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.042201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.042483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.042518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.042855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.042889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.043213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.043248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.043553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.043588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.043870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.043904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.044127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.044163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.044302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.044337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.044657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.044918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.044952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.045212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.045249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.045512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.045546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.045838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.045873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.046180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.046215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.046498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.046533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.046804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.046839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.047145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.047181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.047421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.047456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.047773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.047808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.048102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.048138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.048413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.048449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.048728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.048763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.049050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.049109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.049297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.049332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.049553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.049589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.049864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.049897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.050133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.050169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 
00:27:44.379 [2024-12-06 11:29:17.050429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.050464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.050727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.050761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.051049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.051098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.051385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.379 [2024-12-06 11:29:17.051421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.379 qpair failed and we were unable to recover it. 00:27:44.379 [2024-12-06 11:29:17.051630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.051665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.051977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.052013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.052327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.052364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.052655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.052690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.052898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.052933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.053140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.053178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.053391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.053426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.053637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.053672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.053796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.053830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.054088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.054123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.054414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.054448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.054749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.054784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.055052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.055097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.055385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.055420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.055562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.055597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.055854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.055889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.056093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.056129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.056319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.056355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.056589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.056624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.056828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.056863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.057090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.057128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.057332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.057366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.057477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.057512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.057795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.057829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.058114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.058150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.058368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.058403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.058713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.058747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.058949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.058984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.059105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.059427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.059462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.059775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.059809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 
00:27:44.380 [2024-12-06 11:29:17.060012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.060047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.060186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.060220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.060529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.060575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.380 qpair failed and we were unable to recover it. 00:27:44.380 [2024-12-06 11:29:17.060848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.380 [2024-12-06 11:29:17.060882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.061170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.061205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 
00:27:44.381 [2024-12-06 11:29:17.061489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.061524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.061760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.061795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.062004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.062038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.062313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.062347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.062609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.062643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 
00:27:44.381 [2024-12-06 11:29:17.062928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.062962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.063249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.063285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.063566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.063601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.063889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.063923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.064205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.064242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 
00:27:44.381 [2024-12-06 11:29:17.064524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.064558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.064845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.064880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.065164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.065201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.065484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.065519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.065734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.065769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 
00:27:44.381 [2024-12-06 11:29:17.065955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.065990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.066192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.066225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.066493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.066527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.066804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.066838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 00:27:44.381 [2024-12-06 11:29:17.067037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.381 [2024-12-06 11:29:17.067084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.381 qpair failed and we were unable to recover it. 
00:27:44.381 [2024-12-06 11:29:17.067579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.067620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.067847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.067882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.068173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.068210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.068554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.068592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.068855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.068891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.069191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.069227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.069424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.069459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.069727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.069760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.069951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.069986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.070249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.070284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.070567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.070602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.070814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.381 [2024-12-06 11:29:17.070847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.381 qpair failed and we were unable to recover it.
00:27:44.381 [2024-12-06 11:29:17.070993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.071029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.071313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.071347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.071531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.071565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.071824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.071858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.072117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.072153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.072361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.072396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.072586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.072621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.072871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.072906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.073240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.073276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.073546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.073580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.073805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.073840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.074154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.074190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.074312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.074346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.074629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.074664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.074975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.075010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.075280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.075316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.075606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.075641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.075842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.075877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.076172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.076207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.076434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.382 [2024-12-06 11:29:17.076745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.382 [2024-12-06 11:29:17.076780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.382 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.077076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.077112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.077384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.077418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.077729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.077764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.077887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.077921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.078187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.078222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.078513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.078548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.078794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.078828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.079066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.079103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.079391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.079425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.079568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.079602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.079794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.079828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.080118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.080154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.080456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.080497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.080777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.080811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.081079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.081114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.081370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.081405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.081616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.081650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.081858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.081893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.082183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.082218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.082479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.082515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.082813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.082846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.083118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.083154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.083449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.083609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.083644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.083877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.083911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.383 [2024-12-06 11:29:17.084034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.383 [2024-12-06 11:29:17.084082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.383 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.084394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.084427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.084710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.084746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.084880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.084914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.085122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.085158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.085287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.085321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.085439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.085474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.085762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.085798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.086002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.086035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.086239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.086273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.086500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.086535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.086738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.086772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.086955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.086990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.087203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.087239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.087422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.087457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.087750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.087785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.087900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.087931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.088216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.088252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.088511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.088546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.088771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.088806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.089086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.089123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.089428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.089461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.089784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.089818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.090108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.090144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.090337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.090372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.090577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.090612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.384 qpair failed and we were unable to recover it.
00:27:44.384 [2024-12-06 11:29:17.090897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.384 [2024-12-06 11:29:17.090931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.091240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.091276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.091566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.091606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.091879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.091915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.092172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.092208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.092515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.092551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.092813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.092848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.093153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.093190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.093392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.093428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.093714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.093749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.093892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.385 [2024-12-06 11:29:17.093927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.385 qpair failed and we were unable to recover it.
00:27:44.385 [2024-12-06 11:29:17.094139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.094175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.094445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.094480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.094777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.094812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.095101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.095137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.095347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.095383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 
00:27:44.385 [2024-12-06 11:29:17.095676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.095711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.096015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.096050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.096253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.096288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.096567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.096602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.096804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.096839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 
00:27:44.385 [2024-12-06 11:29:17.097126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.097162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.097377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.097412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.097634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.097668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.097868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.097903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 00:27:44.385 [2024-12-06 11:29:17.098126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.385 [2024-12-06 11:29:17.098163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.385 qpair failed and we were unable to recover it. 
00:27:44.385 [2024-12-06 11:29:17.098367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.098400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.098597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.098632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.098849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.098885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.099098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.099140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.099387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.099422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 
00:27:44.386 [2024-12-06 11:29:17.099715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.099750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.099969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.100289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.100326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.100613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.100648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.100926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.100961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 
00:27:44.386 [2024-12-06 11:29:17.101176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.101213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.101471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.101506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.101706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.101740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.102018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.102053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.102272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.102307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 
00:27:44.386 [2024-12-06 11:29:17.102531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.102566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.102698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.102733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.103029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.103078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.103347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.103382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.103587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.103622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 
00:27:44.386 [2024-12-06 11:29:17.103889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.104239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.104274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.104577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.104613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.104794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.104829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.386 [2024-12-06 11:29:17.105056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.105103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 
00:27:44.386 [2024-12-06 11:29:17.105390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.386 [2024-12-06 11:29:17.105426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.386 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.105713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.105748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.105946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.105980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.106243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.106280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.106466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.106500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 
00:27:44.387 [2024-12-06 11:29:17.106690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.106724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.106874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.106909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.107226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.107263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.107585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.107620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.107918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.107953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 
00:27:44.387 [2024-12-06 11:29:17.108221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.108258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.108550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.108584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.108830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.108865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.109150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.109186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.109472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.109507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 
00:27:44.387 [2024-12-06 11:29:17.109787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.109822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.110074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.110110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.110359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.110394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.110516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.110550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.110684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.110724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 
00:27:44.387 [2024-12-06 11:29:17.111013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.111048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.111353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.111388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.111573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.111607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.111892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.111927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.112126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.112163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 
00:27:44.387 [2024-12-06 11:29:17.112426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.387 [2024-12-06 11:29:17.112462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.387 qpair failed and we were unable to recover it. 00:27:44.387 [2024-12-06 11:29:17.112693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.112728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.112914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.112948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.113216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.113252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.113536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.113570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 
00:27:44.388 [2024-12-06 11:29:17.113779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.113814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.114015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.114050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.114351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.114385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.114590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.114625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.114861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.114897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 
00:27:44.388 [2024-12-06 11:29:17.115034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.115082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.115379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.115413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.115620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.115655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.115939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.115973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.116133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.116169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 
00:27:44.388 [2024-12-06 11:29:17.116306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.116341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.116612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.116646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.116854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.116888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.117005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.117039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 00:27:44.388 [2024-12-06 11:29:17.117338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.388 [2024-12-06 11:29:17.117375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.388 qpair failed and we were unable to recover it. 
00:27:44.388 [2024-12-06 11:29:17.117611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.388 [2024-12-06 11:29:17.117645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.388 qpair failed and we were unable to recover it.
00:27:44.388 [... the same three-message sequence (connect() failed, errno = 111 → sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats continuously from 11:29:17.117917 through 11:29:17.147144; every reconnect attempt fails identically ...]
00:27:44.393 [2024-12-06 11:29:17.147342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.147378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.147561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.147596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.147819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.147854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.148130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.148166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.148355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.148392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 
00:27:44.393 [2024-12-06 11:29:17.148663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.148697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.148896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.148930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.149198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.149235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.149457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.149493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.149717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.149752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 
00:27:44.393 [2024-12-06 11:29:17.150042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.150103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.150336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.150371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.150677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.150713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.150926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.150960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 00:27:44.393 [2024-12-06 11:29:17.151225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.393 [2024-12-06 11:29:17.151263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.393 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.151527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.151562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.151860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.151895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.152170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.152206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.152395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.152430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.152635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.152670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.152874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.152909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.153174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.153212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.153422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.153457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.153657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.153692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.153882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.153917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.154179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.154216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.154499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.154536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.154819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.154853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.155146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.155183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.155460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.155495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.155722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.155757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.155983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.156018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.156220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.156257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.156375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.156409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.156609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.156643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.156855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.156890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.157098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.157135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.157448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.157482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.157747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.157784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.394 [2024-12-06 11:29:17.157986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.158020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 
00:27:44.394 [2024-12-06 11:29:17.158179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.394 [2024-12-06 11:29:17.158213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.394 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.158523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.158558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.158819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.158854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.159192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.159227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.159449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.159484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 
00:27:44.395 [2024-12-06 11:29:17.159825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.159860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.160176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.160212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.160472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.160508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.160633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.160669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.160934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.160968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 
00:27:44.395 [2024-12-06 11:29:17.161182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.161220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.161534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.161570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.161813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.161847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.162161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.162195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.162388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.162425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 
00:27:44.395 [2024-12-06 11:29:17.162684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.162719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.162938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.162974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.163187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.163224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.163407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.163443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.163631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.163666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 
00:27:44.395 [2024-12-06 11:29:17.163875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.163910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.164196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.164232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.164534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.164569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.164776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.164818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.165084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.165121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 
00:27:44.395 [2024-12-06 11:29:17.165253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.165287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.395 [2024-12-06 11:29:17.165550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.395 [2024-12-06 11:29:17.165585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.395 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.165774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.165811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.166027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.166074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.166294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.166329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 
00:27:44.396 [2024-12-06 11:29:17.166640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.166677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.166932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.166966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.167168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.167204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.167486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.167522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 00:27:44.396 [2024-12-06 11:29:17.167832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-12-06 11:29:17.167866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.396 qpair failed and we were unable to recover it. 
00:27:44.396 [2024-12-06 11:29:17.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.396 [2024-12-06 11:29:17.168194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.396 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats continuously with successive timestamps (11:29:17.168158 through 11:29:17.197443); every reconnect attempt for tqpair=0xc20590 to 10.0.0.2 port 4420 fails the same way with errno = 111 ...]
00:27:44.400 [2024-12-06 11:29:17.197662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.197697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.198007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.198043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.198295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.198330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.198636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.198672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.198801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.198838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 
00:27:44.400 [2024-12-06 11:29:17.199133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.199168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.199407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.199441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.199795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.199915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.199950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.200139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.200181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 
00:27:44.400 [2024-12-06 11:29:17.200494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.200528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.200637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.200671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.200857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.200892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.201162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.201199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.201460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.201495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 
00:27:44.400 [2024-12-06 11:29:17.201786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.201823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.202165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.202457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.202491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.202764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.202799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.203098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.203135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 
00:27:44.400 [2024-12-06 11:29:17.203291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.203326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.203481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.203516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.400 qpair failed and we were unable to recover it. 00:27:44.400 [2024-12-06 11:29:17.203741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.400 [2024-12-06 11:29:17.203776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.204008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.204043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.204249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.204287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.204483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.204518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.204787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.204821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.205018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.205053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.205296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.205332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.205467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.205501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.205783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.205818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.206070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.206107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.206316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.206351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.206559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.206594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.206846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.206881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.207137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.207173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.207301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.207335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.207461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.207495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.207719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.207753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.207986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.208022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.208250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.208287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.208494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.208529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.208661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.208696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.208908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.208944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.209152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.209188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.209440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.209475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.209626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.209661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.209955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.209989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.210163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.210199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.210398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.210433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 
00:27:44.401 [2024-12-06 11:29:17.210657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.210697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.210893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.401 [2024-12-06 11:29:17.210928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.401 qpair failed and we were unable to recover it. 00:27:44.401 [2024-12-06 11:29:17.211138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.211174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.211369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.211405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.211526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.211561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 
00:27:44.402 [2024-12-06 11:29:17.211800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.211834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.212096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.212132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.212361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.212396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.212528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.212563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.212771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.212807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 
00:27:44.402 [2024-12-06 11:29:17.213051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.213098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.213235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.213270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.213554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.213590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.213851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.213885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.214191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.214227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 
00:27:44.402 [2024-12-06 11:29:17.214429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.214464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.214592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.214625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.214945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.214980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.215274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.215312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.215495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.215529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 
00:27:44.402 [2024-12-06 11:29:17.215852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.215886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.216104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.216141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.216358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.216392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.216573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.216607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 00:27:44.402 [2024-12-06 11:29:17.216871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.402 [2024-12-06 11:29:17.216905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.402 qpair failed and we were unable to recover it. 
00:27:44.402 [2024-12-06 11:29:17.217161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.402 [2024-12-06 11:29:17.217196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.402 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats verbatim with only the timestamps advancing, from 2024-12-06 11:29:17.217 through 2024-12-06 11:29:17.247 ...]
00:27:44.406 [2024-12-06 11:29:17.248006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.248041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.248358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.248394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.248597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.248631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.248946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.248980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.249194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.249230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 
00:27:44.406 [2024-12-06 11:29:17.249414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.249448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.249647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.249681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.249892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.249926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.250144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.250179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.250442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.250476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 
00:27:44.406 [2024-12-06 11:29:17.250603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.250638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.250763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.250797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.250986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.251020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.251332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.251369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.251628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.251663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 
00:27:44.406 [2024-12-06 11:29:17.251879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.251913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.252121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.252156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.252362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.252396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.252610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.252645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.252844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.252879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 
00:27:44.406 [2024-12-06 11:29:17.253082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.253118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.253343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.253569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.253604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.253787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.253822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.254044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.254090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 
00:27:44.406 [2024-12-06 11:29:17.254205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.254237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.254559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.254595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.254885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.254920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.255189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.406 [2024-12-06 11:29:17.255225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.406 qpair failed and we were unable to recover it. 00:27:44.406 [2024-12-06 11:29:17.255457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.255492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.255683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.255718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.255857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.255892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.256095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.256132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.256315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.256350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.256621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.256656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.256915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.256950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.257137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.257172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.257303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.257338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.257631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.257665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.257809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.257844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.258032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.258077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.258291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.258327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.258652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.258687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.258873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.258907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.259191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.259228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.259351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.259384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.259592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.259626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.259846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.259880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.260015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.260049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.260289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.260324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.260509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.260543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.260683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.260718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.260922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.260957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.261145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.261181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.261311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.261346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 
00:27:44.407 [2024-12-06 11:29:17.261659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.261693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.261967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.262000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.262214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.262250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.407 [2024-12-06 11:29:17.262506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.407 [2024-12-06 11:29:17.262541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.407 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.262852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.262886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 
00:27:44.408 [2024-12-06 11:29:17.263111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.263147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.263410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.263446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.263732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.263766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.263959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.263993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.264179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.264221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 
00:27:44.408 [2024-12-06 11:29:17.264482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.264516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.264715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.264750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.264974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.265008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.265267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.265376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.265411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 
00:27:44.408 [2024-12-06 11:29:17.265670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.265705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.265841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.265876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.266168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.266204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.266336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.266370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 00:27:44.408 [2024-12-06 11:29:17.266568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.408 [2024-12-06 11:29:17.266603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.408 qpair failed and we were unable to recover it. 
00:27:44.408 [2024-12-06 11:29:17.266832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.408 [2024-12-06 11:29:17.266867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.408 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure records repeated, timestamps 11:29:17.266832 through 11:29:17.295456 — duplicates elided]
00:27:44.692 [2024-12-06 11:29:17.295638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.295672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.295850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.295883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.296090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.296126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.296313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.296347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.296535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.296569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 
00:27:44.692 [2024-12-06 11:29:17.296850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.296883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.297012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.297047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.297282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.297316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.297443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.297477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.297610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.297644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 
00:27:44.692 [2024-12-06 11:29:17.297843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.297883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.298081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.298116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.298327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.298361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.298648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.298682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.298891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.298925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 
00:27:44.692 [2024-12-06 11:29:17.299048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.299094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.299226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.299260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.692 [2024-12-06 11:29:17.299455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.692 [2024-12-06 11:29:17.299488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.692 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.299613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.299647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.299853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.299886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.300077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.300112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.300230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.300281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.300531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.300565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.300753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.300787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.301041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.301090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.301349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.301383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.301528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.301562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.301685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.301718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.301909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.301944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.302129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.302165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.302346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.302380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.302579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.302614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.302820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.302854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.303032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.303076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.303268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.303301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.303433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.303467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.303590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.303622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.303872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.303906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.304028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.304074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.304358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.304392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.304570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.304603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.304865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.304898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.305112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.305147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.305274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.305308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.305550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.305583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.305835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.305867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.305984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.306019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.306228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.306262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.306440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.306474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.306741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.306774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 
00:27:44.693 [2024-12-06 11:29:17.306881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.693 [2024-12-06 11:29:17.306914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.693 qpair failed and we were unable to recover it. 00:27:44.693 [2024-12-06 11:29:17.307104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.307145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.307343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.307376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.307591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.307626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.307764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.307797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.694 [2024-12-06 11:29:17.307927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.307961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.308177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.308212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.308330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.308362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.308563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.308597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.308704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.308737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.694 [2024-12-06 11:29:17.308920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.308953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.309149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.309185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.309367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.309399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.309576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.309609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.309804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.309838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.694 [2024-12-06 11:29:17.310019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.310246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.310280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.310477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.310511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.310690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.310722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.310996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.311029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.694 [2024-12-06 11:29:17.311246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.311281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.311590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.311623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.311871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.311904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.312086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.312122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 00:27:44.694 [2024-12-06 11:29:17.312243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.312276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.694 [2024-12-06 11:29:17.312487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.694 [2024-12-06 11:29:17.312520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.694 qpair failed and we were unable to recover it. 
00:27:44.698 (last 3 messages repeated ~114 more times between [2024-12-06 11:29:17.312] and [2024-12-06 11:29:17.339], all with tqpair=0xc20590, addr=10.0.0.2, port=4420; only timestamps differ) 
00:27:44.698 [2024-12-06 11:29:17.340001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.340035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.340275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.340308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.340481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.340514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.340710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.340743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.340919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.340952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 
00:27:44.698 [2024-12-06 11:29:17.341123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.341157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.341342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.341375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.341480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.341512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.341623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.341656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.341863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.341896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 
00:27:44.698 [2024-12-06 11:29:17.342088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.342122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.342255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.342289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.342419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.342452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.342640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.342672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.342953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.342986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 
00:27:44.698 [2024-12-06 11:29:17.343198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.343233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.343448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.343481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.343679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.343711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.343897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.343930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.344178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.344213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 
00:27:44.698 [2024-12-06 11:29:17.344428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.344461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.344591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.344625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.344814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.344847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.345029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.345072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 00:27:44.698 [2024-12-06 11:29:17.345250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.345283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.698 qpair failed and we were unable to recover it. 
00:27:44.698 [2024-12-06 11:29:17.345420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.698 [2024-12-06 11:29:17.345454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.345739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.345772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.346048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.346090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.346210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.346243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.346461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.346495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.346694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.346726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.346929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.346962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.347098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.347133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.347324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.347356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.347566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.347598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.347809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.347841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.348028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.348072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.348320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.348353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.348606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.348645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.348788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.348821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.348940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.348972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.349189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.349224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.349365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.349398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.349584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.349617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.349800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.349833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.350070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.350105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.350221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.350254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.350434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.350468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.350646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.350678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.350862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.350895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.351081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.351114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.351314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.351348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.351656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.351690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.351877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.351910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.352134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.352169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 
00:27:44.699 [2024-12-06 11:29:17.352391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.352423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.352595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.352629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.352819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.352851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.353072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.353107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.699 qpair failed and we were unable to recover it. 00:27:44.699 [2024-12-06 11:29:17.353308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.699 [2024-12-06 11:29:17.353341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 
00:27:44.700 [2024-12-06 11:29:17.353533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.353566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.353691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.353723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.354022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.354056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.354293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.354325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.354518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.354551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 
00:27:44.700 [2024-12-06 11:29:17.354766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.354804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.354931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.354964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.355142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.355176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.355353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.355386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.355509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.355542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 
00:27:44.700 [2024-12-06 11:29:17.355726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.355757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.355958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.355991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.356105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.356139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.356250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.356282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 00:27:44.700 [2024-12-06 11:29:17.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.700 [2024-12-06 11:29:17.356506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.700 qpair failed and we were unable to recover it. 
00:27:44.700 [2024-12-06 11:29:17.356683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.700 [2024-12-06 11:29:17.356717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.700 qpair failed and we were unable to recover it.
00:27:44.700 [... the identical three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:29:17.356969 through 11:29:17.381459 ...]
00:27:44.703 [2024-12-06 11:29:17.381576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.703 [2024-12-06 11:29:17.381609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.381735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.381768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.381959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.381991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.382181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.382216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.382399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.382431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.382560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.382593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.382771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.382805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.383112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.383146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.383280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.383313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.383492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.383526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.383698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.383731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.383927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.383966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.384152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.384187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.384407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.384440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.384623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.384656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.384829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.384862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.385118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.385152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.385334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.385367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.385483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.385517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.385721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.385753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.385970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.386003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.386265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.386298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.386569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.386602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.386787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.386819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.387117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.387152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.387291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.387325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.387538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.387571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.387837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.387870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.388056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.388102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.388223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.388255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 
00:27:44.704 [2024-12-06 11:29:17.388361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.388393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.388600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.388633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.388892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.388924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.389073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.389106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.704 qpair failed and we were unable to recover it. 00:27:44.704 [2024-12-06 11:29:17.389220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.704 [2024-12-06 11:29:17.389252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.389372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.389406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.389595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.389628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.389799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.389832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.390082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.390117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.390316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.390349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.390548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.390580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.390707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.390741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.390945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.390977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.391102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.391137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.391268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.391301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.391423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.391456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.391561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.391595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.391714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.391748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.392023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.392056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.392363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.392397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.392570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.392603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.392720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.392760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.392944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.392992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.393274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.393308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.393504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.393537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.393835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.393867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.394052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.394099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.394422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.394455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.394755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.394789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.394986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.395018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.395149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.395183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.395377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.395409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.395645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.395678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.395948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.395980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.396165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.396199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 
00:27:44.705 [2024-12-06 11:29:17.396326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.396358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.705 [2024-12-06 11:29:17.396567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.705 [2024-12-06 11:29:17.396601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.705 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.396717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.396749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.396918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.396951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.397141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.397177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 
00:27:44.706 [2024-12-06 11:29:17.397357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.397389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.397604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.397638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.397759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.397791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.397990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.398234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 
00:27:44.706 [2024-12-06 11:29:17.398392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.398538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.398786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.398949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.398982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 00:27:44.706 [2024-12-06 11:29:17.399101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.706 [2024-12-06 11:29:17.399135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.706 qpair failed and we were unable to recover it. 
00:27:44.709 [2024-12-06 11:29:17.422907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.422940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.423195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.423229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.423367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.423400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.423516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.423549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.423655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.423689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 
00:27:44.709 [2024-12-06 11:29:17.423959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.423991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.424186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.424220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.424406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.709 [2024-12-06 11:29:17.424438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.709 qpair failed and we were unable to recover it. 00:27:44.709 [2024-12-06 11:29:17.424642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.424676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.424794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.424826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.425000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.425033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.425295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.425328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.425464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.425497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.425690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.425722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.425930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.425964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.426082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.426116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.426303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.426335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.426543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.426575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.426834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.426868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.426986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.427017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.427148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.427182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.427426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.427458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.427629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.427661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.427908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.427940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.428142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.428177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.428305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.428337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.428612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.428645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.428865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.428897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.429144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.429178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.429315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.429347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.429464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.429497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.429612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.429644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.429889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.429922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.430110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.430143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.430393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.430425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.430601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.430634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.430840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.430872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.430980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.431014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.431166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.431200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.431384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.431423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 
00:27:44.710 [2024-12-06 11:29:17.431542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.431575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.710 [2024-12-06 11:29:17.431827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.710 [2024-12-06 11:29:17.431859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.710 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.432069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.432103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.432406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.432439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.432550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.432582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.432834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.432867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.433088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.433120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.433305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.433338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.433456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.433488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.433608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.433640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.433810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.433843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.434080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.434243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.434385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.434527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.434669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.434810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.435033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.435089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.435284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.435317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.435511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.435544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.435661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.435694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.435891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.435923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.436236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.436272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.436390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.436422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.436579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.436846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.436879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.437057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.437106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.437242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.437274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.437457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.437490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.437664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.437696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.437817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.437850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.437981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.438013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.438220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.438254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.438439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.438471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.711 [2024-12-06 11:29:17.438593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.438625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 
00:27:44.711 [2024-12-06 11:29:17.438751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.711 [2024-12-06 11:29:17.438784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.711 qpair failed and we were unable to recover it. 00:27:44.712 [2024-12-06 11:29:17.438961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.712 [2024-12-06 11:29:17.438995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.712 qpair failed and we were unable to recover it. 00:27:44.712 [2024-12-06 11:29:17.439272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.712 [2024-12-06 11:29:17.439305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.712 qpair failed and we were unable to recover it. 00:27:44.712 [2024-12-06 11:29:17.439447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.712 [2024-12-06 11:29:17.439479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.712 qpair failed and we were unable to recover it. 00:27:44.712 [2024-12-06 11:29:17.439706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.712 [2024-12-06 11:29:17.439738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.712 qpair failed and we were unable to recover it. 
00:27:44.714 [2024-12-06 11:29:17.454129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.714 [2024-12-06 11:29:17.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:44.714 qpair failed and we were unable to recover it.
00:27:44.714 [2024-12-06 11:29:17.454476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.714 [2024-12-06 11:29:17.454548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:44.714 qpair failed and we were unable to recover it.
00:27:44.715 [2024-12-06 11:29:17.463770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.463803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.463983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.464017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.464234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.464267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.464540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.464573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.464757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.464789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 
00:27:44.715 [2024-12-06 11:29:17.464989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.465022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.465159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.465193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.465446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.465479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.465610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.465642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.465824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.465857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 
00:27:44.715 [2024-12-06 11:29:17.466041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.466085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.466261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.466294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.466412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.466446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.466636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.466670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.466935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.466969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 
00:27:44.715 [2024-12-06 11:29:17.467180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.715 [2024-12-06 11:29:17.467215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.715 qpair failed and we were unable to recover it. 00:27:44.715 [2024-12-06 11:29:17.467486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.467518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.467714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.467747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.467923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.467956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.468230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.468264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.468451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.468484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.468600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.468633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.468821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.468854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.468972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.469005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.469116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.469150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.469424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.469458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.469658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.469691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.469870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.469903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.470106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.470140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.470320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.470353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.470456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.470489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.470673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.470705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.470906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.470939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.471129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.471287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.471436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.471646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.471789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.471947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.471986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.472177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.472211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.472335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.472368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.472479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.472511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.472726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.472760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.472886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.472919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.473093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.473126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 
00:27:44.716 [2024-12-06 11:29:17.473299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.473332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.473460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.473492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.473675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.716 [2024-12-06 11:29:17.473708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.716 qpair failed and we were unable to recover it. 00:27:44.716 [2024-12-06 11:29:17.473901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.473934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.474045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.474085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.474212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.474245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.474350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.474383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.474595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.474629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.474805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.474838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.475104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.475139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.475332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.475365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.475541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.475573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.475819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.475851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.476142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.476177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.476349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.476382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.476602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.476635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.476762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.476794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.476979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.477013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.477138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.477172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.477356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.477388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.477685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.477758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.477998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.478034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.478259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.478294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.478498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.478531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.478728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.478760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.479007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.479040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.479242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.479275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.479531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.479563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.479680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.479712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 00:27:44.717 [2024-12-06 11:29:17.479971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.717 [2024-12-06 11:29:17.480003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.717 qpair failed and we were unable to recover it. 
00:27:44.717 [2024-12-06 11:29:17.480131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.717 [2024-12-06 11:29:17.480163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.717 qpair failed and we were unable to recover it.
[The same three-line error sequence (connect() failed with errno = 111, then the nvme_tcp_qpair_connect_sock failure for tqpair=0x7f9d4c000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 11:29:17.480 through 11:29:17.506.]
00:27:44.721 [2024-12-06 11:29:17.506516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.721 [2024-12-06 11:29:17.506549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.721 qpair failed and we were unable to recover it. 00:27:44.721 [2024-12-06 11:29:17.506725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.721 [2024-12-06 11:29:17.506759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.721 qpair failed and we were unable to recover it. 00:27:44.721 [2024-12-06 11:29:17.506972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.721 [2024-12-06 11:29:17.507004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.721 qpair failed and we were unable to recover it. 00:27:44.721 [2024-12-06 11:29:17.507163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.721 [2024-12-06 11:29:17.507197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.721 qpair failed and we were unable to recover it. 00:27:44.721 [2024-12-06 11:29:17.507467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.721 [2024-12-06 11:29:17.507500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.721 qpair failed and we were unable to recover it. 
00:27:44.721 [2024-12-06 11:29:17.507682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.507714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.507905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.507937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.508119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.508153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.508373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.508406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.508664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.508696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.508810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.508840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.509010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.509043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.509173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.509205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.509475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.509508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.509804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.509837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.510021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.510053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.510183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.510216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.510489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.510521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.510722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.510755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.511021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.511053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.511191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.511224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.511361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.511394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.511570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.511603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.511845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.511878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.512120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.512157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.512342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.512375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.512476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.512509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.512809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.512842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.513044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.513087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.513228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.513261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.513372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.513404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.513519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.513552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.513756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.513789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.513971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.514003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.514233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.514266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 
00:27:44.722 [2024-12-06 11:29:17.514405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.514444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.514617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.514650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.514772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.722 [2024-12-06 11:29:17.514805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.722 qpair failed and we were unable to recover it. 00:27:44.722 [2024-12-06 11:29:17.514984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.515017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.515241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.515275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 
00:27:44.723 [2024-12-06 11:29:17.515451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.515483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.515674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.515707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.515883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.515916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.516138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.516172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.516372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.516405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 
00:27:44.723 [2024-12-06 11:29:17.516589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.516622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.516806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.516839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.516952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.517252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.517286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.517536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.517569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 
00:27:44.723 [2024-12-06 11:29:17.517837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.517870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.518002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.518035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.518221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.518254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.518502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.518534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.518721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 
00:27:44.723 [2024-12-06 11:29:17.518873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.518906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.519086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.519120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.519240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.519272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.519386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.519419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 00:27:44.723 [2024-12-06 11:29:17.519535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.723 [2024-12-06 11:29:17.519568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.723 qpair failed and we were unable to recover it. 
00:27:44.724 [2024-12-06 11:29:17.519772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.519804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.520014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.520047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.520270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.520304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.520403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.520435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.520625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.520658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 
00:27:44.724 [2024-12-06 11:29:17.520871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.520903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.521093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.521127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.521321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.521353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.521543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.521576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.521761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.521793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 
00:27:44.724 [2024-12-06 11:29:17.522003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.522035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.522234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.522268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.522451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.522484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.522652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.522685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 00:27:44.724 [2024-12-06 11:29:17.522801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.724 [2024-12-06 11:29:17.522833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.724 qpair failed and we were unable to recover it. 
00:27:44.724 [2024-12-06 11:29:17.523038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:44.724 [2024-12-06 11:29:17.523096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 
00:27:44.724 qpair failed and we were unable to recover it. 
00:27:44.727 [2024-12-06 11:29:17.547135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.727 [2024-12-06 11:29:17.547169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.727 qpair failed and we were unable to recover it. 00:27:44.727 [2024-12-06 11:29:17.547300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.727 [2024-12-06 11:29:17.547333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:44.727 qpair failed and we were unable to recover it. 00:27:44.727 [2024-12-06 11:29:17.547649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.727 [2024-12-06 11:29:17.547721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.727 qpair failed and we were unable to recover it. 00:27:44.727 [2024-12-06 11:29:17.547966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.727 [2024-12-06 11:29:17.548002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.727 qpair failed and we were unable to recover it. 00:27:44.727 [2024-12-06 11:29:17.548141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.548178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.548304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.548336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.548545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.548577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.548874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.548907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.549086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.549120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.549387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.549420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.549554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.549586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.549708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.549741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.549877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.549909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.550184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.550218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.550393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.550426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.550729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.550762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.550969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.551202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.551365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.551515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.551733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.551937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.551969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.552146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.552179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.552450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.552482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.552670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.552703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.552974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.553006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.553215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.553249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.553420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.553453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.553724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.553756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.553875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.553914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.554128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.554163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.554392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.554425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.554527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.554559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.554733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.554766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.554954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.554988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 
00:27:44.728 [2024-12-06 11:29:17.555168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.555202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.555332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-12-06 11:29:17.555365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.728 qpair failed and we were unable to recover it. 00:27:44.728 [2024-12-06 11:29:17.555560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.555594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.555768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.555801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.556091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.556126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.729 [2024-12-06 11:29:17.556392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.556425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.556549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.556583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.556849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.556881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.557080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.557115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.557234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.557267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.729 [2024-12-06 11:29:17.557444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.557477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.557660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.557694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.557876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.557908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.558160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.558367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.558400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.729 [2024-12-06 11:29:17.558590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.558623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.558896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.558929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.559039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.559082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.559268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.559300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.559474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.559508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.729 [2024-12-06 11:29:17.559773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.559806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.559994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.560034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.560223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.560256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.560432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.560464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.560663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.560695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.729 [2024-12-06 11:29:17.560890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.560923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.561098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.561304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.561338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.561526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.561559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 00:27:44.729 [2024-12-06 11:29:17.561742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-12-06 11:29:17.561774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.729 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.562045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.562097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.562215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.562248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.562428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.562461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.562568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.562600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.562793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.562826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.563106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.563140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.563330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.563363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.563632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.563665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.563874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.563905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.564118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.564153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.564344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.564376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.564657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.564689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.564892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.564925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.565109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.565143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.565324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.565356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.565553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.565587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.565712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.565746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.565921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.565954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.566077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.566111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.566303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.566337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.566522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.566554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.566727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.566761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.566936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.566969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.567112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.567146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.567418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.567451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.567578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.567610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.567808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.567841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.568022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.568055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.568251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.568284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.568459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.568492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 
00:27:44.730 [2024-12-06 11:29:17.568678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.568711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.568983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.569016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.569289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-12-06 11:29:17.569327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.730 qpair failed and we were unable to recover it. 00:27:44.730 [2024-12-06 11:29:17.569429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.569461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.569758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.569790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.569962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.569995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.570201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.570235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.570364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.570397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.570581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.570614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.570859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.570892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.571135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.571170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.571357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.571390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.571518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.571551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.571750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.571782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.571960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.571994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.572239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.572272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.572497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.572529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.572649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.572682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.572876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.572909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.573090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.573124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.573365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.573398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.573666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.573700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.573886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.573918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.574101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.574133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.574403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.574435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.574624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.574657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.574830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.574862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.575081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.575117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.575231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.575264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.575434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.575472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.575589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.575623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.575881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.575913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.576100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.576133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.576308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.576341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.576513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.576546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 
00:27:44.731 [2024-12-06 11:29:17.576815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.576848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.731 [2024-12-06 11:29:17.576975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.731 [2024-12-06 11:29:17.577007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.731 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.577214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.577248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.577367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.577399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.577492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.577691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.577724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.577922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.577955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.578198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.578232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.578440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.578473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.578658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.578690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.578872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.578904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.579019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.579052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.579347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.579381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.579483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.579516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.579708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.579740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.579858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.579890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.580133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.580167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.580409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.580441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.580634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.580667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.580867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.580900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.581144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.581357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.581390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.581534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.581567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.581695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.581729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.581995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.582028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.582150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.582184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.582452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.582485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.582696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.582729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.582921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.582954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.583217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.583251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.583524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.583558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.583750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.583782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.583904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.583937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.584179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.584213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 00:27:44.732 [2024-12-06 11:29:17.584414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.732 [2024-12-06 11:29:17.584446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.732 qpair failed and we were unable to recover it. 
00:27:44.732 [2024-12-06 11:29:17.584635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.584673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.584799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.584832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.584935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.584968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.585157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.585191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.585295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.585327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 
00:27:44.733 [2024-12-06 11:29:17.585538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.585571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.585672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.585705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.585892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.585926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.586182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.586215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.586523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.586556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 
00:27:44.733 [2024-12-06 11:29:17.586744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.586777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.586905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.586937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.587105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.587138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.587415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.587623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.587656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 
00:27:44.733 [2024-12-06 11:29:17.587775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.587806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.587990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.588024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.588223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.588257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.588467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.588501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.588755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.588788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 
00:27:44.733 [2024-12-06 11:29:17.589048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.589091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.589203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.589234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.589355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.589388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.589574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.589607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 00:27:44.733 [2024-12-06 11:29:17.589849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.733 [2024-12-06 11:29:17.589881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:44.733 qpair failed and we were unable to recover it. 
00:27:44.733 [2024-12-06 11:29:17.590092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.590126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.590260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.590293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.590539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.590572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.590756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.590788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.590915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.590948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.591257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.591494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.591527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.591664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.591697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.733 [2024-12-06 11:29:17.591831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.733 [2024-12-06 11:29:17.591864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.733 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.592101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.592135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.592316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.592350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.592474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.592507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.592726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.592759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.592944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.592979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.593158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.593194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.593327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.593360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.593597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.593670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.593937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.593974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.594157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.594193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.594502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.594536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.594659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.594692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.594805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.594837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.595098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.595132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.595306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.595339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.595581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.595614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.595865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.595898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.596036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.596079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.596195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.596227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.596501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.596698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.596739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.596859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.597072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.597107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.597306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.597338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.597522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.597555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.597809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.597841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.598030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.598074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.598262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.598295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.598535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.598567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.598681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.734 [2024-12-06 11:29:17.598715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.734 qpair failed and we were unable to recover it.
00:27:44.734 [2024-12-06 11:29:17.598890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.598923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.599026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.599069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.599343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.599376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.599644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.599676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.599809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.599842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.600032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.600076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.600259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.600291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.600461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.600494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.600760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.600792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.600917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.600949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.601139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.601172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.601399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.601432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.601561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.601594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.601787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.601820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.602025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.602066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.602240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.602273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.602447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.602481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.602679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.602717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1888225 Killed "${NVMF_APP[@]}" "$@"
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.603024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.603056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.603251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.603283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 [2024-12-06 11:29:17.603500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.603533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.603724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.603757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-12-06 11:29:17.603887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.603920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.604126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.604160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt [2024-12-06 11:29:17.604302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.604335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.604609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.604642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable [2024-12-06 11:29:17.604884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.604916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-06 11:29:17.605097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.605132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.605381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.605414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.605649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.605844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.605876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.606070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.606104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.606322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.606354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.606537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.735 [2024-12-06 11:29:17.606570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.735 qpair failed and we were unable to recover it.
00:27:44.735 [2024-12-06 11:29:17.606784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.736 [2024-12-06 11:29:17.606816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:44.736 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.607032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.607073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.607277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.607310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.607548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.607580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.607782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.607815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.608033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.608074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.608192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.608221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.608325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.608361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.608556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.608587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.608782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.608815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.609027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.609070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.609379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.609591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.609624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.609806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.609839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.610027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.610068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.610276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.610309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.610427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.610459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.610704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.610738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.610947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.610981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.611158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.611193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.611367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.611399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 [2024-12-06 11:29:17.611686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.611720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1889048
00:27:45.031 [2024-12-06 11:29:17.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.031 [2024-12-06 11:29:17.611946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.031 qpair failed and we were unable to recover it.
00:27:45.031 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1889048
00:27:45.031 [2024-12-06 11:29:17.612134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.612168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:45.032 [2024-12-06 11:29:17.612315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.612350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.612530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1889048 ']'
00:27:45.032 [2024-12-06 11:29:17.612564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.612839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:45.032 [2024-12-06 11:29:17.612871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.613163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:45.032 [2024-12-06 11:29:17.613197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.613386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.613420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:45.032 [2024-12-06 11:29:17.613533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.613566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:45.032 [2024-12-06 11:29:17.613790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.613824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.032 [2024-12-06 11:29:17.614094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.614129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.614254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.032 [2024-12-06 11:29:17.614287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.032 qpair failed and we were unable to recover it.
00:27:45.032 [2024-12-06 11:29:17.614533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.614565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.614805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.614838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.615051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.615095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.615363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.615398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.615527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.615560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 
00:27:45.032 [2024-12-06 11:29:17.615818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.615855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.616046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.616089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.616226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.616259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.616368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.616402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.616585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.616624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 
00:27:45.032 [2024-12-06 11:29:17.616814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.616847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.617032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.617088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.617208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.617240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.617359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.617392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.617598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.617631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 
00:27:45.032 [2024-12-06 11:29:17.617927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.617962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.618074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.618110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.618322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.618355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.618625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.032 [2024-12-06 11:29:17.618657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.032 qpair failed and we were unable to recover it. 00:27:45.032 [2024-12-06 11:29:17.618850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.618884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.619085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.619119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.619355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.619387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.619574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.619607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.619748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.619781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.619958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.619991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.620109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.620142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.620335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.620367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.620565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.620599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.620709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.620741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.620987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.621020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.621225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.621259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.621376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.621408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.621580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.621613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.621814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.621847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.622122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.622156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.622263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.622303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.622519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.622592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.622799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.622837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.623029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.623074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.623345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.623380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.623568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.623600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.623726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.623758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.623950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.623983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.624248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.624282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 00:27:45.033 [2024-12-06 11:29:17.624402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.033 [2024-12-06 11:29:17.624434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.033 qpair failed and we were unable to recover it. 
00:27:45.033 [2024-12-06 11:29:17.624570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.624603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.624732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.624765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.624972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.625005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.625215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.625251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.625493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.625527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 
00:27:45.034 [2024-12-06 11:29:17.625791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.625824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.626043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.626203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.626344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.626486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 
00:27:45.034 [2024-12-06 11:29:17.626731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.626874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.626907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.627083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.627118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.627293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.627326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.627506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.627539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 
00:27:45.034 [2024-12-06 11:29:17.627674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.627708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.627907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.627941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.628044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.628086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.628270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.628308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.628495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.628528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 
00:27:45.034 [2024-12-06 11:29:17.628650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.628683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.628811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.628844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.629043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.629087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.629210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.629244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 00:27:45.034 [2024-12-06 11:29:17.629368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.629401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.034 qpair failed and we were unable to recover it. 
00:27:45.034 [2024-12-06 11:29:17.629522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.034 [2024-12-06 11:29:17.629555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.629675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.629707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.629831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.629863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.630109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.630144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.630325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.630359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.630546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.630578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.630793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.630825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.630938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.630971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.631152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.631186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.631374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.631407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.631598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.631632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.631897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.631930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.632133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.632167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.632356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.632390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.632506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.632540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.632654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.632687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.632868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.632901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.633024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.633067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.633317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.633351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.633475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.633508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.633687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.633721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.633912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.633946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.634083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.634118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.634414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.634449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.634558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.634591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.634771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.634805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.634999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.635032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.635188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.635428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.635461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.635773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.635808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 
00:27:45.035 [2024-12-06 11:29:17.635936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.635970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.636215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.636250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.636526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.636560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.636676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.035 [2024-12-06 11:29:17.636710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.035 qpair failed and we were unable to recover it. 00:27:45.035 [2024-12-06 11:29:17.636898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.636937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.637194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.637230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.637359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.637393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.637496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.637529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.637701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.637735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.637859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.637894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.638149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.638185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.638364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.638399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.638585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.638619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.638746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.638779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.638892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.638925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.639140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.639175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.639364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.639397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.639584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.639617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.639804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.639839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.640017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.640051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.640256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.640288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.640472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.640506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.640719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.640753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.640867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.640901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.641099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.641134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.641439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.641472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.641745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.641778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.641887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.641921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.642103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.642138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.642322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.642356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 
00:27:45.036 [2024-12-06 11:29:17.642480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.642513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.642726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.642765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.642869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.642903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.036 [2024-12-06 11:29:17.643201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.036 [2024-12-06 11:29:17.643237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.036 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.643360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.643393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.643506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.643539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.643662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.643696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.643889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.643923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.644092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.644127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.644253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.644286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.644460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.644494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.644740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.644773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.645016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.645049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.645220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.645254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.645367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.645400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.645638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.645710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.645850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.645887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.645999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.646033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.646227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.646261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.646491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.646525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.646709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.646741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.646987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.647020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.647153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.647187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.647450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.647483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.647606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.647639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.647852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.647887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.647998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.648031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.648297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.648329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.648544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.648586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.648697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.648730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.648915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.648947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.649167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.649410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.649443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.649634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.649668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.037 [2024-12-06 11:29:17.649801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.649833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 
00:27:45.037 [2024-12-06 11:29:17.650082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.037 [2024-12-06 11:29:17.650117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.037 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.650234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.650266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.650387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.650421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.650608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.650641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.650830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.650862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.651035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.651077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.651350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.651385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.651523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.651556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.651747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.651781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.651967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.652001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.652197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.652231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.652509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.652541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.652675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.652710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.652958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.652991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.653149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.653183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.653396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.653428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.653645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.653679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.653798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.653830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.653959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.653992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.654182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.654216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.654454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.654526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.654711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.654780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.655004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.655041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.655176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.655211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.655420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.655453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.655650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.655866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.655900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.656085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.656130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.656240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.656273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.656401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.656434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 
00:27:45.038 [2024-12-06 11:29:17.656621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.656655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.656865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.656899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.657042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.657107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.657229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.657262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.038 qpair failed and we were unable to recover it. 00:27:45.038 [2024-12-06 11:29:17.657499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.038 [2024-12-06 11:29:17.657532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.657708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.657741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.657844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.657877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.658081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.658116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.658292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.658325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.658440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.658472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.658584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.658618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.658749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.658782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.659074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.659108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.659286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.659319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.659568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.659601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.659732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.659765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.659952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.659985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.660122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.660162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.660439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.660472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.660593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.660626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.660815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.660847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.661033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.661075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 [2024-12-06 11:29:17.661073] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.661112] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.039 [2024-12-06 11:29:17.661262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.661292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.661562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.661592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.661707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.661738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.661925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.661956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.662079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.662112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.662383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.662416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.662662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.662694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.662867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.662899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.663025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.663068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.663259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.663291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.663469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.663502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.663695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.663727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 
00:27:45.039 [2024-12-06 11:29:17.663839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.663872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.664046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.664090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.664293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.039 [2024-12-06 11:29:17.664326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.039 qpair failed and we were unable to recover it. 00:27:45.039 [2024-12-06 11:29:17.664500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.664532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.664660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.664693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 
00:27:45.040 [2024-12-06 11:29:17.664820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.664853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.665087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.665120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.665423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.665456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.665661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.665694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.665907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.665946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 
00:27:45.040 [2024-12-06 11:29:17.666153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.666187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.666310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.666343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.666520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.666553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.666850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.666884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.667178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.667212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 
00:27:45.040 [2024-12-06 11:29:17.667409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.667443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.667660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.667693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.667875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.668017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.668052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.668186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.668219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 
00:27:45.040 [2024-12-06 11:29:17.668334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.668367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.668560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.668593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.668770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.668803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.669004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.669038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 00:27:45.040 [2024-12-06 11:29:17.669263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.040 [2024-12-06 11:29:17.669298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.040 qpair failed and we were unable to recover it. 
00:27:45.040 [2024-12-06 11:29:17.669481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.669513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.669631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.669664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.669835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.669869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.670048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.670093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.670279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.670312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.670502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.670535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.670708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.670742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.670928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.670960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.671078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.671113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.671357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.671389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.671499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.671533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.671804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.040 [2024-12-06 11:29:17.671843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.040 qpair failed and we were unable to recover it.
00:27:45.040 [2024-12-06 11:29:17.672136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.672171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.672360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.672393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.672521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.672553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.672811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.672845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.672975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.673248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.673471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.673612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.673775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.673927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.673961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.674086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.674122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.674300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.674333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.674512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.674545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.674836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.674878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.675084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.675120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.675414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.675446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.675652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.675684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.675906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.675939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.676136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.676170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.676279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.676311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.676496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.676529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.676725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.676758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.676871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.676903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.677032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.677075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.677320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.677352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.677484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.677516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.677702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.677744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.677865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.677897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.678079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.678113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.678336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.678368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.678553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.678585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.678761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.678794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.678975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.679007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.041 [2024-12-06 11:29:17.679291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-12-06 11:29:17.679324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.679455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.679487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.679687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.679719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.679936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.679969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.680114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.680268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.680422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.680587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.680978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.681192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.681347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.681504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.681645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.681854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.681885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.682070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.682104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.682349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.682381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.682586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.682619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.682826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.682858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.683098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.683132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.683359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.683408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.683598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.683633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.683822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.683856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.683987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.684021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.684230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.684264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.684534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.684567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.684693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.684725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.684840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.684875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.685006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.685038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.042 [2024-12-06 11:29:17.685342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.042 [2024-12-06 11:29:17.685375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.042 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.685621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.685653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.685830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.685864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.686119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.686154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.686357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.686399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.686594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.686627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.686912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.686944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.687179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.687213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.687336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.687368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.687492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.687524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.687819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.687852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.688137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.688171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.688350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.688382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.688651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.688684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.688867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.688900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.689155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.689190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.689320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.689353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.689591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.689624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.689815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.689849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.690035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.690081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.690190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.690223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.690428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.690461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.690740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.690774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.690963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.690995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.691196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.691231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.691344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.691377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.691512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.691545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.691728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.691760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.691929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.691962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.692079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.692114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.692242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.043 [2024-12-06 11:29:17.692275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.043 qpair failed and we were unable to recover it.
00:27:45.043 [2024-12-06 11:29:17.692534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.043 [2024-12-06 11:29:17.692577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.043 qpair failed and we were unable to recover it. 00:27:45.043 [2024-12-06 11:29:17.692705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.043 [2024-12-06 11:29:17.692739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.043 qpair failed and we were unable to recover it. 00:27:45.043 [2024-12-06 11:29:17.692854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.043 [2024-12-06 11:29:17.692887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.043 qpair failed and we were unable to recover it. 00:27:45.043 [2024-12-06 11:29:17.693186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.693222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.693338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.693371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.693589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.693622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.693738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.693771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.693952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.693985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.694255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.694291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.694566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.694599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.694789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.694823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.695097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.695131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.695316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.695349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.695536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.695576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.695768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.695801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.695918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.695952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.696199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.696233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.696422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.696455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.696562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.696595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.696780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.696813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.697031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.697077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.697296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.697328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.697440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.697473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.697594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.697627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.697810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.697843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.698024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.698069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.698259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.698293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.698581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.698613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.698743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.698776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.699026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.699069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.699273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.699306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.699429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.699462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.699708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.699741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.699942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.699974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.700087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.700124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 
00:27:45.044 [2024-12-06 11:29:17.700235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.700269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.044 [2024-12-06 11:29:17.700448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.044 [2024-12-06 11:29:17.700480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.044 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.700691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.700724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.700935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.700968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.701156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.701190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.701396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.701441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.701621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.701655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.701773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.701807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.701922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.701957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.702206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.702242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.702433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.702466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.702753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.702785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.702964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.702997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.703268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.703302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.703412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.703446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.703703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.703735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.703861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.703895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.704078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.704113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.704285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.704318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.704575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.704609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.704798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.704831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.704961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.704993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.705123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.705157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.705285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.705318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.705530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.705563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.705751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.705784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.705959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.705993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.706266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.706301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.706576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.706608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.706727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.706760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.706886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.706921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.707141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.707175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.707426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.707465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.707717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.707749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-12-06 11:29:17.708079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-12-06 11:29:17.708113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 
00:27:45.045 [2024-12-06 11:29:17.708361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.708395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.708522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.708554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.708748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.708782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.708961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.708995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.709122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.709157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-12-06 11:29:17.709297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.709331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.709602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.709635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.709815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.709848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.709970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.710003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.710298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.710332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-12-06 11:29:17.710514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.710547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.710665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.710699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.710884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.710917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.711029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.711072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-12-06 11:29:17.711189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-12-06 11:29:17.711222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-12-06 11:29:17.711407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.046 [2024-12-06 11:29:17.711440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420
00:27:45.046 qpair failed and we were unable to recover it.
00:27:45.047 [2024-12-06 11:29:17.716364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.047 [2024-12-06 11:29:17.716410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.047 qpair failed and we were unable to recover it.
00:27:45.049 [2024-12-06 11:29:17.732273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.049 [2024-12-06 11:29:17.732313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.049 qpair failed and we were unable to recover it.
00:27:45.049 [2024-12-06 11:29:17.735113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.735148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-12-06 11:29:17.735271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.735304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-12-06 11:29:17.735579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.735612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-12-06 11:29:17.735742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.735776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-12-06 11:29:17.735805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.049 [2024-12-06 11:29:17.736019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.736053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 
00:27:45.049 [2024-12-06 11:29:17.736266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-12-06 11:29:17.736301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.736504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.736544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.736730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.736764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.737002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.737036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.737172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.737207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.737456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.737489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.737662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.737697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.737913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.737947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.738154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.738188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.738450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.738483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.738674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.738707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.738905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.738938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.739210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.739245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.739439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.739472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.739714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.739749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.739971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.740289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.740325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.740515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.740549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.740798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.740832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.741004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.741039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.741305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.741339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.741521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.741554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.741797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.741830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.742046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.742091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.742293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.742327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.742503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.742536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.742754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.742788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.743009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.743166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.743201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.743380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.743415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-12-06 11:29:17.743527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.743560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.743806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.743839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.744024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.744069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.744368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-12-06 11:29:17.744404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-12-06 11:29:17.744603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.744637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.744830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.744864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.745080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.745115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.745318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.745353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.745541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.745575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.745823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.745858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.746155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.746191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.746462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.746503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.746641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.746675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.746864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.746898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.747091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.747125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.747229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.747263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.747444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.747477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.747664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.747697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.747809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.747842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.748105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.748139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.748327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.748361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.748488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.748522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.748723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.748756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.749025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.749078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.749257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.749290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.749556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.749589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.749703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.749736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.749921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.749954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.750140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.750176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.750320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.750352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.750538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.750572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.750761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.750795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.750980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.751013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.751141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.751175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.751424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.751457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-12-06 11:29:17.751629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.751662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.751906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-12-06 11:29:17.751939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-12-06 11:29:17.752074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.752108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.752313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.752347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.752463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.752496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-12-06 11:29:17.752756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.752790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.752893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.752926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.753196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.753230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.753455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.753488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-12-06 11:29:17.753685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-12-06 11:29:17.753718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-12-06 11:29:17.753912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.052 [2024-12-06 11:29:17.753946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.052 qpair failed and we were unable to recover it.
00:27:45.055 [2024-12-06 11:29:17.774336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:45.055 [2024-12-06 11:29:17.774367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:45.055 [2024-12-06 11:29:17.774373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:45.055 [2024-12-06 11:29:17.774380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:45.055 [2024-12-06 11:29:17.774384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:45.055 [2024-12-06 11:29:17.775943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:45.055 [2024-12-06 11:29:17.776056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:45.055 [2024-12-06 11:29:17.776169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:45.055 [2024-12-06 11:29:17.776170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:45.055 [2024-12-06 11:29:17.776902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.776935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.777070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.777105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.777291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.777324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.777499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.777532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.777776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.777809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 
00:27:45.055 [2024-12-06 11:29:17.778020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.778053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.778237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.778270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.778442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.778475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.778751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.778784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.778988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 
00:27:45.055 [2024-12-06 11:29:17.779175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.779399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.779677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.779829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 00:27:45.055 [2024-12-06 11:29:17.779962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.055 [2024-12-06 11:29:17.779996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.055 qpair failed and we were unable to recover it. 
00:27:45.055 [2024-12-06 11:29:17.780210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.780245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.780420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.780453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.780670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.780703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.780894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.780928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.781053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.781099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.781215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.781247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.781516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.781556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.781675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.781710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.781883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.781915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.782037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.782079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.782330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.782364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.782550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.782582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.782685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.782718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.782896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.782930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.783174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.783210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.783384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.783417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.783601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.783636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.783910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.784212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.784246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.784538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.784696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.784730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.784900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.784933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.785079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.785114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.785296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.785329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.785435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.785600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.785635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.785819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.785853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.785980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.786188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.786402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 
00:27:45.056 [2024-12-06 11:29:17.786558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.786770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.786926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.786959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.056 [2024-12-06 11:29:17.787231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.056 [2024-12-06 11:29:17.787266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.056 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.787535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.787570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.787672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.787705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.787888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.787920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.788165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.788201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.788322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.788356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.788482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.788515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.788788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.788822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.788995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.789029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.789224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.789259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.789533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.789566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.789751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.789786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.789903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.789937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.790110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.790153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.790398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.790432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.790626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.790659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.790788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.790822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.791114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.791152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.791406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.791440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.791558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.791594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.791787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.791821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.792033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.792077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.792207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.792242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.792369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.792405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.792649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.792683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.792875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.792911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.793128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.793164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 
00:27:45.057 [2024-12-06 11:29:17.793366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.793401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.793576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.793611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.793819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.793853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.057 [2024-12-06 11:29:17.794031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.057 [2024-12-06 11:29:17.794073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.057 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.794200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.794234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 
00:27:45.058 [2024-12-06 11:29:17.794533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.794569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.794783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.794909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.794943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.795140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.795176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.795297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.795331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 
00:27:45.058 [2024-12-06 11:29:17.795455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.795489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.795622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.795657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.795844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.795877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.796079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.796113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 00:27:45.058 [2024-12-06 11:29:17.796300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.058 [2024-12-06 11:29:17.796334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.058 qpair failed and we were unable to recover it. 
00:27:45.061 [2024-12-06 11:29:17.820232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.820266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.820447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.820482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.820673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.820708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.820901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.820937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.821123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.821159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 
00:27:45.061 [2024-12-06 11:29:17.821372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.821406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.821584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.821618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.821791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.821825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.822009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.822045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.822363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.822399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 
00:27:45.061 [2024-12-06 11:29:17.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.822677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.822897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.823099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.061 [2024-12-06 11:29:17.823135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.061 qpair failed and we were unable to recover it. 00:27:45.061 [2024-12-06 11:29:17.823246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.823277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.823465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.823500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.823615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.823649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.823855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.823888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.824159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.824195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.824475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.824509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.824631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.824664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.824933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.824967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.825215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.825256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.825385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.825420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.825556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.825591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.825764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.825799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.825903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.825936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.826130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.826167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.826354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.826387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.826576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.826610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.826818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.826852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.826982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.827236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.827462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.827602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.827754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.827911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.827945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.828050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.828098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.828285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.828321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.828514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.828547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.828727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.828760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 
00:27:45.062 [2024-12-06 11:29:17.828947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.828982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.829093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.829128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.829300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.062 [2024-12-06 11:29:17.829333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.062 qpair failed and we were unable to recover it. 00:27:45.062 [2024-12-06 11:29:17.829453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.829488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.829737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.829772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.830022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.830056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.830254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.830288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.830465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.830505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.830722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.830757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.830959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.830992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.831116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.831151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.831345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.831379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.831549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.831582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.831775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.831809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.831929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.831963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.832087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.832124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.832341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.832373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.832649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.832683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.832985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.833019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.833237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.833272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.833458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.833491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.833677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.833716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.833978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.834013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.834233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.834267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.834400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.834433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.834638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.834672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.834849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.834888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.835014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.835048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.835251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.835284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.835494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.835526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.835695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.835728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.835943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.835976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.836103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.836138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.836319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.836351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 00:27:45.063 [2024-12-06 11:29:17.836482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.836514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it. 
00:27:45.063 [2024-12-06 11:29:17.836835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.063 [2024-12-06 11:29:17.836868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.063 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously from 11:29:17.836835 through 11:29:17.862970, always with addr=10.0.0.2, port=4420, cycling through tqpair handles 0x7f9d40000b90, 0x7f9d44000b90, 0xc20590, and 0x7f9d4c000b90 ...]
00:27:45.067 [2024-12-06 11:29:17.863095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.863129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.863302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.863335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.863524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.863557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.863770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.863802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.864078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.864113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 
00:27:45.067 [2024-12-06 11:29:17.864305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.864337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.864536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.864574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.864757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.864791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.864968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.865002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.865146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.865180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 
00:27:45.067 [2024-12-06 11:29:17.865358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.865391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.865596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.865629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.865897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.865929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.866192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.866228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.866354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.866386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 
00:27:45.067 [2024-12-06 11:29:17.866582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.866615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.866900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.866934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.067 [2024-12-06 11:29:17.867049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.867093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 00:27:45.067 [2024-12-06 11:29:17.867279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.067 [2024-12-06 11:29:17.867312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.067 qpair failed and we were unable to recover it. 
00:27:45.067 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:45.068 [2024-12-06 11:29:17.867508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.867542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.867727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.867760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.867943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.867976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.068 [2024-12-06 11:29:17.868181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.868218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.868338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.868371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.868545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.868578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.868779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.868814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.869085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.869121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.869306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.869339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.869571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.869604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.869709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.869743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.869948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.869981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.870127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.870277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.870425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.870631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.870785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.870943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.870977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.871212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.871489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.871523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.871817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.871852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.872030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.872076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.872286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.872319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.872525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.872558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.872691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.872724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.872858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.872892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.873129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.873164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.873346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.873379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.873572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.873605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 
00:27:45.068 [2024-12-06 11:29:17.873733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.873766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.873889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.068 [2024-12-06 11:29:17.873923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.068 qpair failed and we were unable to recover it. 00:27:45.068 [2024-12-06 11:29:17.874160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.874195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.874387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.874420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.874525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.874559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.874737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.874770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.874975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.875163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.875368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.875590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.875756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.875915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.875947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.876069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.876243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.876407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.876562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.876785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.876941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.876975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.877166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.877201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.877404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.877438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.877645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.877679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.877790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.877824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.878039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.878084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.878226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.878260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.878492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.878525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.878716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.878749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.878936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.878968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.879085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.879121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.879368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.879401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.879573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.879607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.879734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.879767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.879874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.879907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.880051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.880107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.880318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.880353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.880565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.880599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 
00:27:45.069 [2024-12-06 11:29:17.880785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.069 [2024-12-06 11:29:17.880819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.069 qpair failed and we were unable to recover it. 00:27:45.069 [2024-12-06 11:29:17.880948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.880987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.881181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.881215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.881357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.881390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.881606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.881639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.881750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.881784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.881899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.881933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.882037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.882083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.882329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.882363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.882498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.882530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.882659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.882692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.882809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.882843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.883021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.883053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.883254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.883287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.883412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.883450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.883646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.883679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.883850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.883885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.884015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.884048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.884191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.884225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.884343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.884376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.884523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.884770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.884803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.884979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.885012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.885211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.885245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.885370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.885403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.885645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.885679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.885859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.885892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.886004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.886037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.886188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.886221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.886425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.886457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.070 [2024-12-06 11:29:17.886702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.886736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.886906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.886940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.887126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.887159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.887287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.887320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 00:27:45.070 [2024-12-06 11:29:17.887423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.070 [2024-12-06 11:29:17.887456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.070 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.887566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.887599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.887712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.887747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.887870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.887904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.888009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.888169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.888340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.888487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.888714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.888932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.888966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.889095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.889130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.889325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.889357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.889472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.889505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.889621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.889653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.889906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.889940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.890073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.890227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.890457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.890599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.890742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.890880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.890922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.891163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.891198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.891403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.891436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.891619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.891652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.891778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.891811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.891915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.891948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.892160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.892195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.892390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.892423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.892541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.892574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.892686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.892720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.892842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.892875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 
00:27:45.071 [2024-12-06 11:29:17.892996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.893029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.893173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.893206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.893333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.893366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.071 [2024-12-06 11:29:17.893592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.071 [2024-12-06 11:29:17.893625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.071 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.893734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.893767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 
00:27:45.072 [2024-12-06 11:29:17.894015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.894176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.894317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.894461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.894616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 
00:27:45.072 [2024-12-06 11:29:17.894771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.894803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.894952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.895141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.895287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.895433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 
00:27:45.072 [2024-12-06 11:29:17.895578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.895864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.895937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.896083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.896241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.896397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 
00:27:45.072 [2024-12-06 11:29:17.896548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.896689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.896861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.896895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.897078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.897113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 00:27:45.072 [2024-12-06 11:29:17.897357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.072 [2024-12-06 11:29:17.897390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.072 qpair failed and we were unable to recover it. 
00:27:45.073 [repeated posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock qpair-failure message pairs elided; tqpair handles 0x7f9d44000b90, 0x7f9d40000b90, and 0x7f9d4c000b90]
00:27:45.073 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.074 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:45.074 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.074 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.075 [repeated posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock qpair-failure message pairs elided; tqpair handles 0x7f9d4c000b90 and 0x7f9d44000b90]
00:27:45.075 [2024-12-06 11:29:17.916429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.075 [2024-12-06 11:29:17.916463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.075 qpair failed and we were unable to recover it. 00:27:45.075 [2024-12-06 11:29:17.916571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.075 [2024-12-06 11:29:17.916610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.075 qpair failed and we were unable to recover it. 00:27:45.075 [2024-12-06 11:29:17.916739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.075 [2024-12-06 11:29:17.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.075 qpair failed and we were unable to recover it. 00:27:45.075 [2024-12-06 11:29:17.916882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.075 [2024-12-06 11:29:17.916915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.075 qpair failed and we were unable to recover it. 00:27:45.075 [2024-12-06 11:29:17.917089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.075 [2024-12-06 11:29:17.917123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.075 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.917247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.917280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.917382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.917415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.917536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.917570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.917673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.917706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.917812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.917844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.917974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.918125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.918255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.918400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.918536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.918753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.918958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.918990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.919097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.919131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.919304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.919447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.919482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.919672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.919705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.919812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.919845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.920032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.920183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.920387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.920597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.920738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.920873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.920907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.921096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.921130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.921255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.921290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.921411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.921445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.921632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.921665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.921900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.921933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.922054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.922203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.922426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.922560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.922695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.922912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.922944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.923119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.923153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 
00:27:45.076 [2024-12-06 11:29:17.923269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.076 [2024-12-06 11:29:17.923303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.076 qpair failed and we were unable to recover it. 00:27:45.076 [2024-12-06 11:29:17.923478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.923517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.923709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.923741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.923849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.923882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.923989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.924021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.924213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.924246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.924424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.924456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.924633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.924665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.924842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.924875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.924987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.925150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.925305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.925450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.925774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.925925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.925957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.926079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.926114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.926220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.926252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.926445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.926477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.926651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.926684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.926801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.926833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.927020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.927052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.927187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.927221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.927336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.927370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.927612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.927645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.927903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.927935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.928123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.928157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.928261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.928293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.928418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.928451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.928579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.928612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.928786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.928818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.077 [2024-12-06 11:29:17.929081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.929115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.929294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.929327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.929568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.929602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.929809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 00:27:45.077 [2024-12-06 11:29:17.929911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.077 [2024-12-06 11:29:17.929943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.077 qpair failed and we were unable to recover it. 
00:27:45.078 [2024-12-06 11:29:17.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.930229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.930356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.930389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.930490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.930523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.930788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.930820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.931141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.931175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 
00:27:45.078 [2024-12-06 11:29:17.931350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.931382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.931498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.931537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.931646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.931680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.078 [2024-12-06 11:29:17.931791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.078 [2024-12-06 11:29:17.931824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.078 qpair failed and we were unable to recover it. 00:27:45.409 [2024-12-06 11:29:17.931929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.410 [2024-12-06 11:29:17.931962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.410 qpair failed and we were unable to recover it. 
00:27:45.410 [2024-12-06 11:29:17.932080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.932115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.932218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.932252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.932378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.932412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.932657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.932690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.932859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.932893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.933007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.933040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.933169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.933204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.933321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.933355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.933534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.933567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.933846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.933879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.934070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.934104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.934221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.934266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.934514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.934547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.934739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.934773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.934889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.934922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.935039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.935083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.935194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.935226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.935399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.935634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.935666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.935841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.935875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.936055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.936099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.936290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.936324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.936428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.936461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.936598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.936633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.936842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.936875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.937050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.937096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.937424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.937459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.937732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.937765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.937948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.937981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.938170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.938204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.938480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.938513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.410 [2024-12-06 11:29:17.938642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.410 [2024-12-06 11:29:17.938675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.410 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.938787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.938820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.938944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.938978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.939093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.939127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.939304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.939337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.939607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.939646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.939770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.939803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.939915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.939948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.940077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.940112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.940238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.940271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.940461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.940493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.940740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.940773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.940974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.941005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.941189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.941224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.941339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.941371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.941552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.941585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.941777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.941809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.942071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.942106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.942338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.942371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 Malloc0
00:27:45.411 [2024-12-06 11:29:17.942611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.942645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.942753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.942787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.942986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.943019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.943153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.411 [2024-12-06 11:29:17.943187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.943376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.943408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.943525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.943558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.943751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.943783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.411 [2024-12-06 11:29:17.943955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.943990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.944123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.944157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.411 [2024-12-06 11:29:17.944365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.944398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.944501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.944533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.411 qpair failed and we were unable to recover it.
00:27:45.411 [2024-12-06 11:29:17.944736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.411 [2024-12-06 11:29:17.944769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.944874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.944906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.945959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.945992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.946200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.946343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.946496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.946637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.946856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.946981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.947013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.947151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.947185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.947355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.947387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.947574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.947606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.947852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.947886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.948132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.948166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.948363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.948395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.948527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.948560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.948686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.948718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.948916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.948948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.949127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.949161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.949283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.949316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.949493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.949526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.949722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.949755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.949867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.949901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.950004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.950037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.950096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:45.412 [2024-12-06 11:29:17.950222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.950255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.950428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.950461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.412 [2024-12-06 11:29:17.950573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-12-06 11:29:17.950606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.412 qpair failed and we were unable to recover it.
00:27:45.413 [2024-12-06 11:29:17.950725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-12-06 11:29:17.950758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420
00:27:45.413 qpair failed and we were unable to recover it.
00:27:45.413 [2024-12-06 11:29:17.950858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.950890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.951016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.951049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.951195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.951228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.951340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.951372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.951566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.951599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.951773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.951807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.951994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.952027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.952340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.952373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.952644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.952676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.952786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.952819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.952997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.953029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d4c000b90 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.953271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.953332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.953465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.953500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.953697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.953731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.953858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.953891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.954080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.954115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.954302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.954337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.954556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.954589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.954960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.955001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.955210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.955245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.955358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.955400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.955573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.955606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.413 [2024-12-06 11:29:17.955799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.955833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:45.413 [2024-12-06 11:29:17.956045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.956094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.956197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.956231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.413 [2024-12-06 11:29:17.956429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.956463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.956584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.956619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.413 [2024-12-06 11:29:17.956808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.956843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.957042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.957105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 
00:27:45.413 [2024-12-06 11:29:17.957233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.957266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.413 [2024-12-06 11:29:17.957379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.413 [2024-12-06 11:29:17.957413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.413 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.957608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.957641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc20590 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.957777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.957822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 A controller has encountered a failure and is being reset. 00:27:45.414 [2024-12-06 11:29:17.958050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.958106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.958363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.958397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.958528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.958562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.958663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.958696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.958816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.958850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.959035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.959081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.959213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.959246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.959496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.959528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.959698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.959732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.959915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.959948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.960086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.960120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.960305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.960338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.960503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.960642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.960675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.960881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.960913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.961131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.961165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.961355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.961388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.961500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.961533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.961713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.961746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.961955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.961988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.962176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.962210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.962456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.962489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.962603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.962636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.962746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.962780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.963046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.963090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.963226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.963259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 
00:27:45.414 [2024-12-06 11:29:17.963457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.414 [2024-12-06 11:29:17.963490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.963668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.963702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:45.414 [2024-12-06 11:29:17.963894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.414 [2024-12-06 11:29:17.963928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.414 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.414 qpair failed and we were unable to recover it. 00:27:45.414 [2024-12-06 11:29:17.964041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.964088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.415 [2024-12-06 11:29:17.964280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.964314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.964515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.964549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.964758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.964791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.964961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.964995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.965201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.965236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 [2024-12-06 11:29:17.965355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.965388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.965576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.965610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.965792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.965826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.965930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.965964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.966151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.966187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 [2024-12-06 11:29:17.966397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.966431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.966540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.966574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.966819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.966853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.967037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.967081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.967327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.967360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 [2024-12-06 11:29:17.967607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.967640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.967765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.967798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.967933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.967968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.968110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.968144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.968319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.968353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 [2024-12-06 11:29:17.968532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.968570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.968676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.968709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.968886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.968919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.969101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.969135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 00:27:45.415 [2024-12-06 11:29:17.969336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.415 [2024-12-06 11:29:17.969370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.415 qpair failed and we were unable to recover it. 
00:27:45.415 [2024-12-06 11:29:17.969567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.416 [2024-12-06 11:29:17.969600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.416 qpair failed and we were unable to recover it. 00:27:45.416 [2024-12-06 11:29:17.969798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.416 [2024-12-06 11:29:17.969832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.416 qpair failed and we were unable to recover it. 00:27:45.416 [2024-12-06 11:29:17.970013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.416 [2024-12-06 11:29:17.970047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.416 qpair failed and we were unable to recover it. 00:27:45.416 [2024-12-06 11:29:17.970179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.416 [2024-12-06 11:29:17.970212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.416 qpair failed and we were unable to recover it. 00:27:45.416 [2024-12-06 11:29:17.970388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.416 [2024-12-06 11:29:17.970422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420 00:27:45.416 qpair failed and we were unable to recover it. 
00:27:45.416 [2024-12-06 11:29:17.970552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.970586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.970800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.970834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.970963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.970997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.971221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.971256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.416 [2024-12-06 11:29:17.971374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.971408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.971527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.971560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:45.416 [2024-12-06 11:29:17.971668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.971702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.971829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.971862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.416 [2024-12-06 11:29:17.971979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.972012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.416 [2024-12-06 11:29:17.972214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.972249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.972376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.972410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.972596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.972630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.972769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.972803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.972982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.973016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.973151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.973185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.973316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.973356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.973569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.973603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.973711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.973745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.973991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.974178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.974435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.974645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.974797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.974953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.416 [2024-12-06 11:29:17.974986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d40000b90 with addr=10.0.0.2, port=4420
00:27:45.416 qpair failed and we were unable to recover it.
00:27:45.416 [2024-12-06 11:29:17.975037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.416 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.416 [2024-12-06 11:29:17.980731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.416 [2024-12-06 11:29:17.980847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.416 [2024-12-06 11:29:17.980889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.416 [2024-12-06 11:29:17.980912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:17.980934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:17.980996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.417 11:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1888374
00:27:45.417 [2024-12-06 11:29:17.990642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:17.990723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:17.990750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:17.990765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:17.990779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:17.990811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.000604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.000687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.000706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.000716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.000725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.000746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.010698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.010759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.010773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.010780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.010786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.010802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.020687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.020753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.020768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.020775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.020781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.020800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.030661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.030714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.030727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.030734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.030740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.030755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.040709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.040762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.040776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.040783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.040789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.040804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.050665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.050719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.050733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.050739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.050745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.050761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.060754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.060850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.060863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.060870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.060876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.060890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.070805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.070869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.070882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.070888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.070894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.070909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.080775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.080821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.080834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.080841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.417 [2024-12-06 11:29:18.080847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.417 [2024-12-06 11:29:18.080861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.417 qpair failed and we were unable to recover it.
00:27:45.417 [2024-12-06 11:29:18.090792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.417 [2024-12-06 11:29:18.090850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.417 [2024-12-06 11:29:18.090863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.417 [2024-12-06 11:29:18.090869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.090875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.090890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.100848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.100901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.100914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.100921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.100926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.100940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.110810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.110860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.110877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.110883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.110890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.110904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.120913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.120965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.120977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.120984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.120989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.121003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.130944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.130996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.131009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.131015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.131021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.131035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.140907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.140959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.140973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.140979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.140985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.140999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.151013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.151071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.151084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.151090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.151096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.151113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.160960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.161011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.161024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.161031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.161036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.161051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.171055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.171118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.171130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.171137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.171142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.171156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.181073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.181121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.181134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.181140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.181145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.181160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.191050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.418 [2024-12-06 11:29:18.191109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.418 [2024-12-06 11:29:18.191123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.418 [2024-12-06 11:29:18.191130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.418 [2024-12-06 11:29:18.191136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.418 [2024-12-06 11:29:18.191151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.418 qpair failed and we were unable to recover it.
00:27:45.418 [2024-12-06 11:29:18.201053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.418 [2024-12-06 11:29:18.201111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.418 [2024-12-06 11:29:18.201124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.418 [2024-12-06 11:29:18.201131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.418 [2024-12-06 11:29:18.201138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.418 [2024-12-06 11:29:18.201152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.418 qpair failed and we were unable to recover it. 
00:27:45.418 [2024-12-06 11:29:18.211161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.211214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.211226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.211233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.211238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.211252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.221151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.221204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.221217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.221225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.221232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.221248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.231149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.231201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.231214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.231220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.231226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.231240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.241266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.241318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.241337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.241344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.241350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.241364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.251417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.251469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.251483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.251491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.251497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.251512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.261328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.261376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.261390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.261396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.261402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.261417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.271356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.271407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.271420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.271427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.271433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.271447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.281390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.281435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.281448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.281454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.281463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.281477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.291329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.291383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.291396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.291403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.291408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.291423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.301364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.301447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.301460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.301467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.301473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.301487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.311459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.311547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.311560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.419 [2024-12-06 11:29:18.311566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.419 [2024-12-06 11:29:18.311572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.419 [2024-12-06 11:29:18.311586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.419 qpair failed and we were unable to recover it. 
00:27:45.419 [2024-12-06 11:29:18.321492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.419 [2024-12-06 11:29:18.321559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.419 [2024-12-06 11:29:18.321573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.420 [2024-12-06 11:29:18.321579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.420 [2024-12-06 11:29:18.321585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.420 [2024-12-06 11:29:18.321599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.420 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.331451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.331506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.681 [2024-12-06 11:29:18.331519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.681 [2024-12-06 11:29:18.331526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.681 [2024-12-06 11:29:18.331532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.681 [2024-12-06 11:29:18.331546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.681 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.341547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.341603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.681 [2024-12-06 11:29:18.341617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.681 [2024-12-06 11:29:18.341623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.681 [2024-12-06 11:29:18.341629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.681 [2024-12-06 11:29:18.341644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.681 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.351585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.351653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.681 [2024-12-06 11:29:18.351666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.681 [2024-12-06 11:29:18.351673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.681 [2024-12-06 11:29:18.351679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.681 [2024-12-06 11:29:18.351694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.681 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.361605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.361656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.681 [2024-12-06 11:29:18.361669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.681 [2024-12-06 11:29:18.361676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.681 [2024-12-06 11:29:18.361682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.681 [2024-12-06 11:29:18.361696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.681 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.371561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.371614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.681 [2024-12-06 11:29:18.371630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.681 [2024-12-06 11:29:18.371637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.681 [2024-12-06 11:29:18.371642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.681 [2024-12-06 11:29:18.371656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.681 qpair failed and we were unable to recover it. 
00:27:45.681 [2024-12-06 11:29:18.381632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.681 [2024-12-06 11:29:18.381687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.381700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.381706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.381712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.381726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.391678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.391733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.391745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.391752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.391758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.391772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.401623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.401674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.401687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.401694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.401699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.401713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.411727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.411780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.411793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.411802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.411808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.411823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.421765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.421816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.421832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.421839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.421844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.421860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.431765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.431814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.431828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.431834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.431840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.431854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.441807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.441899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.441914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.441920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.441926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.441941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.451846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.451900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.451913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.451920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.451925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.451940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.461860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.461909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.461922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.461929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.461936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.461951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.471898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.471951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.471965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.471972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.471978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.471992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.481919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.481970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.682 [2024-12-06 11:29:18.481984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.682 [2024-12-06 11:29:18.481991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.682 [2024-12-06 11:29:18.481997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.682 [2024-12-06 11:29:18.482011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.682 qpair failed and we were unable to recover it. 
00:27:45.682 [2024-12-06 11:29:18.491954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.682 [2024-12-06 11:29:18.492006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.683 [2024-12-06 11:29:18.492018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.683 [2024-12-06 11:29:18.492025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.683 [2024-12-06 11:29:18.492031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.683 [2024-12-06 11:29:18.492045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.683 qpair failed and we were unable to recover it. 
00:27:45.683 [2024-12-06 11:29:18.501988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.683 [2024-12-06 11:29:18.502043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.683 [2024-12-06 11:29:18.502056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.683 [2024-12-06 11:29:18.502067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.683 [2024-12-06 11:29:18.502073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.683 [2024-12-06 11:29:18.502088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.683 qpair failed and we were unable to recover it. 
00:27:45.683 [2024-12-06 11:29:18.512005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.512064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.512078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.512085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.512091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.512105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.522043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.522097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.522110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.522116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.522122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.522137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.532078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.532148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.532160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.532167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.532173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.532188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.542099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.542152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.542166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.542176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.542181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.542195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.552169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.552227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.552240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.552246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.552252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.552267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.562118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.562182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.562195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.562202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.562208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.562223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.572225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.572281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.572294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.572300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.572306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.572320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.582209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.582260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.582273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.582279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.582285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.582303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.592237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.592286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.592299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.592305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.592311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.592326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.683 qpair failed and we were unable to recover it.
00:27:45.683 [2024-12-06 11:29:18.602250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.683 [2024-12-06 11:29:18.602296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.683 [2024-12-06 11:29:18.602309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.683 [2024-12-06 11:29:18.602315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.683 [2024-12-06 11:29:18.602321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.683 [2024-12-06 11:29:18.602335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.684 qpair failed and we were unable to recover it.
00:27:45.684 [2024-12-06 11:29:18.612302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.684 [2024-12-06 11:29:18.612355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.684 [2024-12-06 11:29:18.612367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.684 [2024-12-06 11:29:18.612373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.684 [2024-12-06 11:29:18.612379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.684 [2024-12-06 11:29:18.612394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.684 qpair failed and we were unable to recover it.
00:27:45.944 [2024-12-06 11:29:18.622323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.944 [2024-12-06 11:29:18.622374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.944 [2024-12-06 11:29:18.622386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.944 [2024-12-06 11:29:18.622393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.944 [2024-12-06 11:29:18.622398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.944 [2024-12-06 11:29:18.622412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.944 qpair failed and we were unable to recover it.
00:27:45.944 [2024-12-06 11:29:18.632406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.944 [2024-12-06 11:29:18.632503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.944 [2024-12-06 11:29:18.632515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.944 [2024-12-06 11:29:18.632521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.944 [2024-12-06 11:29:18.632527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.944 [2024-12-06 11:29:18.632541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.944 qpair failed and we were unable to recover it.
00:27:45.944 [2024-12-06 11:29:18.642399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.944 [2024-12-06 11:29:18.642452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.944 [2024-12-06 11:29:18.642465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.944 [2024-12-06 11:29:18.642472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.944 [2024-12-06 11:29:18.642478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.944 [2024-12-06 11:29:18.642493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.944 qpair failed and we were unable to recover it.
00:27:45.944 [2024-12-06 11:29:18.652415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.944 [2024-12-06 11:29:18.652467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.944 [2024-12-06 11:29:18.652480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.944 [2024-12-06 11:29:18.652486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.944 [2024-12-06 11:29:18.652492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.944 [2024-12-06 11:29:18.652506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.944 qpair failed and we were unable to recover it.
00:27:45.944 [2024-12-06 11:29:18.662427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.944 [2024-12-06 11:29:18.662480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.662493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.662500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.662505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.662520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.672457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.672539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.672554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.672561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.672566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.672580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.682477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.682528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.682540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.682547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.682552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.682568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.692512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.692565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.692577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.692583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.692589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.692604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.702541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.702596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.702608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.702615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.702620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.702635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.712574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.712626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.712639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.712646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.712651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.712669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.722581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.722632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.722645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.722651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.722657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.722672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.732602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.732656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.732669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.732675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.732681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.732695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.742653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.742708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.742722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.742728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.742735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.742749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.752678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.752743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.752756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.752763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.752768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.752783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.762697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.762747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.945 [2024-12-06 11:29:18.762761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.945 [2024-12-06 11:29:18.762767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.945 [2024-12-06 11:29:18.762773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.945 [2024-12-06 11:29:18.762787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.945 qpair failed and we were unable to recover it.
00:27:45.945 [2024-12-06 11:29:18.772732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.945 [2024-12-06 11:29:18.772782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.946 [2024-12-06 11:29:18.772794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.946 [2024-12-06 11:29:18.772801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.946 [2024-12-06 11:29:18.772806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.946 [2024-12-06 11:29:18.772820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.946 qpair failed and we were unable to recover it.
00:27:45.946 [2024-12-06 11:29:18.782836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.946 [2024-12-06 11:29:18.782896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.946 [2024-12-06 11:29:18.782908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.946 [2024-12-06 11:29:18.782916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.946 [2024-12-06 11:29:18.782922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.946 [2024-12-06 11:29:18.782937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.946 qpair failed and we were unable to recover it.
00:27:45.946 [2024-12-06 11:29:18.792829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.946 [2024-12-06 11:29:18.792880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.946 [2024-12-06 11:29:18.792893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.946 [2024-12-06 11:29:18.792900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.946 [2024-12-06 11:29:18.792906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.946 [2024-12-06 11:29:18.792920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.946 qpair failed and we were unable to recover it.
00:27:45.946 [2024-12-06 11:29:18.802860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.946 [2024-12-06 11:29:18.802909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.946 [2024-12-06 11:29:18.802924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.946 [2024-12-06 11:29:18.802931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.946 [2024-12-06 11:29:18.802937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.946 [2024-12-06 11:29:18.802951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.946 qpair failed and we were unable to recover it.
00:27:45.946 [2024-12-06 11:29:18.812884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.946 [2024-12-06 11:29:18.812959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.946 [2024-12-06 11:29:18.812972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.946 [2024-12-06 11:29:18.812979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.946 [2024-12-06 11:29:18.812985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:45.946 [2024-12-06 11:29:18.812999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:45.946 qpair failed and we were unable to recover it.
00:27:45.946 [2024-12-06 11:29:18.822902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.822959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.822973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.822979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.822985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.822999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:45.946 [2024-12-06 11:29:18.832907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.832954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.832967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.832974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.832980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.832994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:45.946 [2024-12-06 11:29:18.842931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.842982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.842995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.843002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.843010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.843025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:45.946 [2024-12-06 11:29:18.852968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.853020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.853033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.853039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.853045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.853064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:45.946 [2024-12-06 11:29:18.862989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.863043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.863056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.863069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.863075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.863090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:45.946 [2024-12-06 11:29:18.873013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.946 [2024-12-06 11:29:18.873065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.946 [2024-12-06 11:29:18.873078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.946 [2024-12-06 11:29:18.873085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.946 [2024-12-06 11:29:18.873091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:45.946 [2024-12-06 11:29:18.873106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.946 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.883042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.883096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.883110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.883116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.883123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.883137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.893088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.893141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.893154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.893160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.893166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.893181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.903106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.903161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.903174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.903181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.903187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.903201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.913128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.913180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.913193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.913200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.913206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.913220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.923156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.923208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.923221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.923228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.923233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.923248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.933191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.933257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.933273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.933280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.933286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.933301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.207 [2024-12-06 11:29:18.943245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.207 [2024-12-06 11:29:18.943299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.207 [2024-12-06 11:29:18.943312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.207 [2024-12-06 11:29:18.943318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.207 [2024-12-06 11:29:18.943324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.207 [2024-12-06 11:29:18.943339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.207 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:18.953248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:18.953335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:18.953347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:18.953354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:18.953359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:18.953373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:18.963266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:18.963343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:18.963356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:18.963363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:18.963368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:18.963382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:18.973304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:18.973358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:18.973371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:18.973382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:18.973388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:18.973403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:18.983333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:18.983386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:18.983399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:18.983406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:18.983412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:18.983425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:18.993354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:18.993402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:18.993415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:18.993422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:18.993427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:18.993441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.003377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.003474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.003487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.003494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.003499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.003514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.013415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.013488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.013501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.013508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.013514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.013529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.023456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.023538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.023551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.023557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.023563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.023576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.033453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.033504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.033516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.033523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.033528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.033543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.043476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.043528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.043542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.043548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.043554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.043568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.053512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.208 [2024-12-06 11:29:19.053563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.208 [2024-12-06 11:29:19.053576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.208 [2024-12-06 11:29:19.053582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.208 [2024-12-06 11:29:19.053588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.208 [2024-12-06 11:29:19.053602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.208 qpair failed and we were unable to recover it. 
00:27:46.208 [2024-12-06 11:29:19.063537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.209 [2024-12-06 11:29:19.063593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.209 [2024-12-06 11:29:19.063606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.209 [2024-12-06 11:29:19.063613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.209 [2024-12-06 11:29:19.063619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.209 [2024-12-06 11:29:19.063633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.209 qpair failed and we were unable to recover it. 
00:27:46.209 [2024-12-06 11:29:19.073537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.209 [2024-12-06 11:29:19.073593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.209 [2024-12-06 11:29:19.073606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.209 [2024-12-06 11:29:19.073613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.209 [2024-12-06 11:29:19.073619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.209 [2024-12-06 11:29:19.073633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.209 qpair failed and we were unable to recover it. 
00:27:46.209 [2024-12-06 11:29:19.083619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.209 [2024-12-06 11:29:19.083716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.209 [2024-12-06 11:29:19.083728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.209 [2024-12-06 11:29:19.083735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.209 [2024-12-06 11:29:19.083741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.209 [2024-12-06 11:29:19.083754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.209 qpair failed and we were unable to recover it. 
00:27:46.209 [2024-12-06 11:29:19.093621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.209 [2024-12-06 11:29:19.093672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.209 [2024-12-06 11:29:19.093685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.209 [2024-12-06 11:29:19.093691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.209 [2024-12-06 11:29:19.093697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.209 [2024-12-06 11:29:19.093711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.209 qpair failed and we were unable to recover it. 
00:27:46.209 [2024-12-06 11:29:19.103662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.209 [2024-12-06 11:29:19.103716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.209 [2024-12-06 11:29:19.103729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.209 [2024-12-06 11:29:19.103738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.209 [2024-12-06 11:29:19.103744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.209 [2024-12-06 11:29:19.103759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.209 qpair failed and we were unable to recover it. 
00:27:46.209 [2024-12-06 11:29:19.113677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.209 [2024-12-06 11:29:19.113728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.209 [2024-12-06 11:29:19.113741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.209 [2024-12-06 11:29:19.113748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.209 [2024-12-06 11:29:19.113753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.209 [2024-12-06 11:29:19.113768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.209 qpair failed and we were unable to recover it.
00:27:46.209 [2024-12-06 11:29:19.123703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.209 [2024-12-06 11:29:19.123751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.209 [2024-12-06 11:29:19.123764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.209 [2024-12-06 11:29:19.123771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.209 [2024-12-06 11:29:19.123777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.209 [2024-12-06 11:29:19.123790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.209 qpair failed and we were unable to recover it.
00:27:46.209 [2024-12-06 11:29:19.133743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.209 [2024-12-06 11:29:19.133793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.209 [2024-12-06 11:29:19.133806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.209 [2024-12-06 11:29:19.133812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.209 [2024-12-06 11:29:19.133818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.209 [2024-12-06 11:29:19.133832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.209 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.143738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.143794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.143807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.143813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.143820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.143837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.153776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.153829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.153842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.153849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.153855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.153869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.163813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.163863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.163876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.163882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.163888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.163902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.173857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.173935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.173948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.173955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.173961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.173976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.183879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.183957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.183971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.183977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.183983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.183997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.470 [2024-12-06 11:29:19.193898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.470 [2024-12-06 11:29:19.193951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.470 [2024-12-06 11:29:19.193964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.470 [2024-12-06 11:29:19.193971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.470 [2024-12-06 11:29:19.193976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.470 [2024-12-06 11:29:19.193991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.470 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.203911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.203963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.203976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.203983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.203988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.204003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.213953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.214012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.214024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.214030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.214036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.214050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.223974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.224026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.224039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.224045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.224051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.224069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.234023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.234079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.234095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.234101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.234107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.234120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.244039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.244091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.244105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.244111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.244117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.244131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.254075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.254129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.254142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.254148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.254154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.254168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.264090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.264142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.264155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.264161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.264168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.264182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.274121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.274170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.274183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.274190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.274198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.274214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.284150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.284201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.284214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.284220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.284226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.284241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.294178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.294233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.294245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.294252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.294258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.294272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.304248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.471 [2024-12-06 11:29:19.304329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.471 [2024-12-06 11:29:19.304342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.471 [2024-12-06 11:29:19.304348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.471 [2024-12-06 11:29:19.304354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.471 [2024-12-06 11:29:19.304368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.471 qpair failed and we were unable to recover it.
00:27:46.471 [2024-12-06 11:29:19.314286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.314372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.314385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.314392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.314398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.314411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.324271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.324321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.324333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.324340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.324345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.324360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.334297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.334351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.334365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.334372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.334377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.334391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.344326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.344379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.344392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.344399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.344405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.344419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.354345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.354402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.354415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.354421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.354427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.354442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.364423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.364474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.364490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.364497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.364503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.364517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.374433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.374484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.374497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.374504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.374509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.374524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.384436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.384523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.384536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.384544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.384549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.384563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.394460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.394511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.394523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.394530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.394536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.394551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.472 [2024-12-06 11:29:19.404459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.472 [2024-12-06 11:29:19.404506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.472 [2024-12-06 11:29:19.404519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.472 [2024-12-06 11:29:19.404525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.472 [2024-12-06 11:29:19.404534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.472 [2024-12-06 11:29:19.404548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.472 qpair failed and we were unable to recover it.
00:27:46.733 [2024-12-06 11:29:19.414583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.733 [2024-12-06 11:29:19.414654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.733 [2024-12-06 11:29:19.414666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.733 [2024-12-06 11:29:19.414672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.733 [2024-12-06 11:29:19.414678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.733 [2024-12-06 11:29:19.414693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.733 qpair failed and we were unable to recover it.
00:27:46.733 [2024-12-06 11:29:19.424552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.733 [2024-12-06 11:29:19.424622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.733 [2024-12-06 11:29:19.424634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.733 [2024-12-06 11:29:19.424642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.733 [2024-12-06 11:29:19.424647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.733 [2024-12-06 11:29:19.424661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.733 qpair failed and we were unable to recover it.
00:27:46.733 [2024-12-06 11:29:19.434570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.733 [2024-12-06 11:29:19.434624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.733 [2024-12-06 11:29:19.434636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.733 [2024-12-06 11:29:19.434643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.733 [2024-12-06 11:29:19.434650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.733 [2024-12-06 11:29:19.434664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.733 qpair failed and we were unable to recover it.
00:27:46.733 [2024-12-06 11:29:19.444590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.733 [2024-12-06 11:29:19.444641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.733 [2024-12-06 11:29:19.444655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.733 [2024-12-06 11:29:19.444661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.733 [2024-12-06 11:29:19.444667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.733 [2024-12-06 11:29:19.444681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.733 qpair failed and we were unable to recover it.
00:27:46.733 [2024-12-06 11:29:19.454596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.734 [2024-12-06 11:29:19.454664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.734 [2024-12-06 11:29:19.454677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.734 [2024-12-06 11:29:19.454683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.734 [2024-12-06 11:29:19.454689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:46.734 [2024-12-06 11:29:19.454703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:46.734 qpair failed and we were unable to recover it.
00:27:46.734 [2024-12-06 11:29:19.464648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.464714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.464727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.464733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.464739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.464753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.474630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.474697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.474710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.474717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.474722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.474736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.484694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.484749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.484762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.484769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.484775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.484789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.494739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.494838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.494855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.494862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.494868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.494883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.504724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.504777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.504790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.504796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.504802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.504816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.514738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.514789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.514802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.514808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.514814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.514829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.524749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.524838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.524851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.524858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.524863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.524878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.534761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.534822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.534834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.534843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.534850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.534864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.544859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.544913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.544926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.544932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.544938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.544953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.554892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.734 [2024-12-06 11:29:19.554947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.734 [2024-12-06 11:29:19.554960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.734 [2024-12-06 11:29:19.554966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.734 [2024-12-06 11:29:19.554972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.734 [2024-12-06 11:29:19.554986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.734 qpair failed and we were unable to recover it. 
00:27:46.734 [2024-12-06 11:29:19.564848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.564900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.564913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.564920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.564926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.564940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.574890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.574940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.574952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.574959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.574964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.574978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.584989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.585039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.585052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.585064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.585070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.585085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.594928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.594982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.594995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.595001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.595007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.595022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.604996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.605087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.605101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.605107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.605113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.605127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.614993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.615045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.615063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.615070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.615076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.615090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.625102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.625157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.625170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.625177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.625183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.625197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.635065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.635127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.635139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.635147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.635152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.635166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.645102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.645198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.645212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.645219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.645225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.645239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.655168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.655223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.655236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.655242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.655248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.655263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-12-06 11:29:19.665142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.735 [2024-12-06 11:29:19.665243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.735 [2024-12-06 11:29:19.665256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.735 [2024-12-06 11:29:19.665266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.735 [2024-12-06 11:29:19.665271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.735 [2024-12-06 11:29:19.665286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.996 [2024-12-06 11:29:19.675176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.996 [2024-12-06 11:29:19.675231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.996 [2024-12-06 11:29:19.675247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.996 [2024-12-06 11:29:19.675253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.996 [2024-12-06 11:29:19.675259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.996 [2024-12-06 11:29:19.675275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.996 qpair failed and we were unable to recover it. 
00:27:46.996 [2024-12-06 11:29:19.685221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.996 [2024-12-06 11:29:19.685310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.996 [2024-12-06 11:29:19.685324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.685330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.685336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.685351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.695369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.695446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.695460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.695467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.695472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.695487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.705331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.705388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.705400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.705407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.705413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.705432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.715278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.715334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.715347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.715354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.715360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.715374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.725372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.725423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.725436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.725442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.725448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.725463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.735390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.735445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.735457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.735463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.735469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.735483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.745370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.745461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.745475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.745481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.745487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.745501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.755404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.755496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.755509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.755516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.755522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.755536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.765491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.765536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.765549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.765556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.765562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.765575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.775450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.775501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.775514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.775520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.775526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.775541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.785545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.785597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.785610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.785616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.997 [2024-12-06 11:29:19.785622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.997 [2024-12-06 11:29:19.785636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.997 qpair failed and we were unable to recover it. 
00:27:46.997 [2024-12-06 11:29:19.795606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.997 [2024-12-06 11:29:19.795658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.997 [2024-12-06 11:29:19.795674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.997 [2024-12-06 11:29:19.795680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.795686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.795701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.805596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.805646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.805658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.805664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.805670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.805685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.815631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.815684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.815697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.815703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.815709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.815723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.825665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.825716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.825728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.825735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.825741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.825755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.835628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.835676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.835688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.835695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.835704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.835719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.845723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.845771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.845784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.845791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.845796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.845811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.855666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.855717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.855731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.855738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.855744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.855758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.865692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.865771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.865784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.865791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.865797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.865811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.875798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.875843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.875856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.875862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.875868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.875882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.885808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.885856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.885869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.885876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.885882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.885896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.895879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.895933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.895946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.895953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.998 [2024-12-06 11:29:19.895959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.998 [2024-12-06 11:29:19.895974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.998 qpair failed and we were unable to recover it. 
00:27:46.998 [2024-12-06 11:29:19.905877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.998 [2024-12-06 11:29:19.905976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.998 [2024-12-06 11:29:19.905989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.998 [2024-12-06 11:29:19.905996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.999 [2024-12-06 11:29:19.906002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.999 [2024-12-06 11:29:19.906016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.999 qpair failed and we were unable to recover it. 
00:27:46.999 [2024-12-06 11:29:19.915912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.999 [2024-12-06 11:29:19.915967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.999 [2024-12-06 11:29:19.915980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.999 [2024-12-06 11:29:19.915986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.999 [2024-12-06 11:29:19.915992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.999 [2024-12-06 11:29:19.916007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.999 qpair failed and we were unable to recover it. 
00:27:46.999 [2024-12-06 11:29:19.925885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:46.999 [2024-12-06 11:29:19.925982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:46.999 [2024-12-06 11:29:19.925999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:46.999 [2024-12-06 11:29:19.926006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:46.999 [2024-12-06 11:29:19.926012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:46.999 [2024-12-06 11:29:19.926026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:46.999 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.936028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.936131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.936144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.936150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.936156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.936170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.945924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.945977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.945990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.945997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.946002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.946017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.956023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.956091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.956114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.956121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.956126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.956145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.966040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.966090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.966104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.966110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.966119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.966134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.976091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.976146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.976159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.976165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.976172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.976187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.986045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.986105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.986118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.986125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.986131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.986145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:19.996140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:19.996190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:19.996203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:19.996210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:19.996216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:19.996231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:20.006233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:20.006328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:20.006341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:20.006348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:20.006354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:20.006369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:20.016282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:20.016350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:20.016363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:20.016370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:20.016376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:20.016391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:20.026251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:20.026311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:20.026324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:20.026331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:20.026337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.260 [2024-12-06 11:29:20.026351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.260 qpair failed and we were unable to recover it. 
00:27:47.260 [2024-12-06 11:29:20.036265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.260 [2024-12-06 11:29:20.036323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.260 [2024-12-06 11:29:20.036336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.260 [2024-12-06 11:29:20.036342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.260 [2024-12-06 11:29:20.036349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.261 [2024-12-06 11:29:20.036366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.261 qpair failed and we were unable to recover it. 
00:27:47.261 [2024-12-06 11:29:20.046321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.261 [2024-12-06 11:29:20.046377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.261 [2024-12-06 11:29:20.046389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.261 [2024-12-06 11:29:20.046396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.261 [2024-12-06 11:29:20.046402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.261 [2024-12-06 11:29:20.046417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.261 qpair failed and we were unable to recover it. 
00:27:47.261 [... the identical CONNECT failure sequence (ctrlr.c:764 "Unknown controller ID 0x1" -> nvme_fabric.c connect rc -5, sct 1, sc 130 -> nvme_tcp.c failed CONNECT poll for tqpair=0x7f9d40000b90 -> nvme_qpair.c CQ transport error -6 on qpair id 4 -> "qpair failed and we were unable to recover it.") repeats 34 more times at ~10 ms intervals, from 2024-12-06 11:29:20.056326 through 2024-12-06 11:29:20.387306 ...]
00:27:47.524 [2024-12-06 11:29:20.397235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.524 [2024-12-06 11:29:20.397288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.524 [2024-12-06 11:29:20.397301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.524 [2024-12-06 11:29:20.397308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.524 [2024-12-06 11:29:20.397316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.524 [2024-12-06 11:29:20.397331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.524 qpair failed and we were unable to recover it. 
00:27:47.524 [2024-12-06 11:29:20.407291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.524 [2024-12-06 11:29:20.407349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.524 [2024-12-06 11:29:20.407361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.524 [2024-12-06 11:29:20.407368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.524 [2024-12-06 11:29:20.407374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.524 [2024-12-06 11:29:20.407388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.524 qpair failed and we were unable to recover it. 
00:27:47.524 [2024-12-06 11:29:20.417306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.524 [2024-12-06 11:29:20.417357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.524 [2024-12-06 11:29:20.417370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.524 [2024-12-06 11:29:20.417376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.524 [2024-12-06 11:29:20.417382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.524 [2024-12-06 11:29:20.417396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.524 qpair failed and we were unable to recover it. 
00:27:47.524 [2024-12-06 11:29:20.427319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.524 [2024-12-06 11:29:20.427368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.524 [2024-12-06 11:29:20.427381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.524 [2024-12-06 11:29:20.427387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.524 [2024-12-06 11:29:20.427392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.524 [2024-12-06 11:29:20.427407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.524 qpair failed and we were unable to recover it. 
00:27:47.524 [2024-12-06 11:29:20.437357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.524 [2024-12-06 11:29:20.437406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.524 [2024-12-06 11:29:20.437419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.525 [2024-12-06 11:29:20.437426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.525 [2024-12-06 11:29:20.437431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.525 [2024-12-06 11:29:20.437445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.525 qpair failed and we were unable to recover it. 
00:27:47.525 [2024-12-06 11:29:20.447422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.525 [2024-12-06 11:29:20.447472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.525 [2024-12-06 11:29:20.447485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.525 [2024-12-06 11:29:20.447491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.525 [2024-12-06 11:29:20.447497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.525 [2024-12-06 11:29:20.447511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.525 qpair failed and we were unable to recover it. 
00:27:47.525 [2024-12-06 11:29:20.457425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.525 [2024-12-06 11:29:20.457476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.525 [2024-12-06 11:29:20.457489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.525 [2024-12-06 11:29:20.457495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.525 [2024-12-06 11:29:20.457500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.525 [2024-12-06 11:29:20.457515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.525 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.467447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.467519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.467532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.467539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.467545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.467559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.477473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.477522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.477535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.477541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.477547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.477561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.487491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.487541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.487556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.487563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.487568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.487583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.497529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.497578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.497591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.497597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.497602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.497617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.507539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.507590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.507602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.507609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.507615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.507628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.517570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.786 [2024-12-06 11:29:20.517623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.786 [2024-12-06 11:29:20.517635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.786 [2024-12-06 11:29:20.517642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.786 [2024-12-06 11:29:20.517647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.786 [2024-12-06 11:29:20.517662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.786 qpair failed and we were unable to recover it. 
00:27:47.786 [2024-12-06 11:29:20.527613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.527661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.527673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.527679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.527687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.527701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.537682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.537732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.537746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.537752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.537758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.537773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.547671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.547721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.547733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.547740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.547746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.547760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.557662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.557712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.557725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.557731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.557737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.557751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.567736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.567821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.567834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.567841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.567846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.567860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.577758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.577822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.577836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.577843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.577848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.577862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.587786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.587832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.587845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.587852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.587858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.587871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.597852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.597903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.597915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.597922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.597927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.597942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.607833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.607899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.607912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.607918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.607924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.607938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.617871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.617922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.787 [2024-12-06 11:29:20.617937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.787 [2024-12-06 11:29:20.617944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.787 [2024-12-06 11:29:20.617949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.787 [2024-12-06 11:29:20.617963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.787 qpair failed and we were unable to recover it. 
00:27:47.787 [2024-12-06 11:29:20.627889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.787 [2024-12-06 11:29:20.627937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.788 [2024-12-06 11:29:20.627949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.788 [2024-12-06 11:29:20.627956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.788 [2024-12-06 11:29:20.627961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.788 [2024-12-06 11:29:20.627975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.788 qpair failed and we were unable to recover it. 
00:27:47.788 [2024-12-06 11:29:20.637918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.788 [2024-12-06 11:29:20.637972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.788 [2024-12-06 11:29:20.637985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.788 [2024-12-06 11:29:20.637992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.788 [2024-12-06 11:29:20.637999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.788 [2024-12-06 11:29:20.638014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.788 qpair failed and we were unable to recover it. 
00:27:47.788 [2024-12-06 11:29:20.647945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.788 [2024-12-06 11:29:20.647993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.788 [2024-12-06 11:29:20.648006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.788 [2024-12-06 11:29:20.648012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.788 [2024-12-06 11:29:20.648018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.788 [2024-12-06 11:29:20.648032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.788 qpair failed and we were unable to recover it. 
00:27:47.788 [2024-12-06 11:29:20.657980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.788 [2024-12-06 11:29:20.658035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.788 [2024-12-06 11:29:20.658047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.788 [2024-12-06 11:29:20.658057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.788 [2024-12-06 11:29:20.658073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:47.788 [2024-12-06 11:29:20.658087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:47.788 qpair failed and we were unable to recover it. 
00:27:47.788 [2024-12-06 11:29:20.668004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.668061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.668074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.668081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.668087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.668101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:47.788 [2024-12-06 11:29:20.677971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.678074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.678089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.678095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.678101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.678116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:47.788 [2024-12-06 11:29:20.688050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.688102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.688115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.688122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.688127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.688141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:47.788 [2024-12-06 11:29:20.698094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.698146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.698159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.698165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.698170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.698188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:47.788 [2024-12-06 11:29:20.708119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.708175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.708187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.708194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.708199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.708213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:47.788 [2024-12-06 11:29:20.718151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.788 [2024-12-06 11:29:20.718203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.788 [2024-12-06 11:29:20.718216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.788 [2024-12-06 11:29:20.718222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.788 [2024-12-06 11:29:20.718227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:47.788 [2024-12-06 11:29:20.718243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.788 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.728171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.728218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.728231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.728237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.728243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.728257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.738207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.738262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.738276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.738283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.738288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.738302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.748263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.748349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.748362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.748368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.748374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.748388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.758270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.758339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.758352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.758359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.758365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.758379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.768297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.768349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.768362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.768369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.768375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.768389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.778367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.778468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.778481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.778488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.778494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.778508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.788351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.788404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.788417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.788426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.788432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.788446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.798486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.798545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.798557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.798563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.798569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.798583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.808427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.808500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.808513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.808520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.049 [2024-12-06 11:29:20.808526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.049 [2024-12-06 11:29:20.808540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.049 qpair failed and we were unable to recover it.
00:27:48.049 [2024-12-06 11:29:20.818461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.049 [2024-12-06 11:29:20.818513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.049 [2024-12-06 11:29:20.818526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.049 [2024-12-06 11:29:20.818532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.818538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.818553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.828488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.828540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.828553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.828560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.828566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.828583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.838406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.838458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.838471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.838478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.838484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.838498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.848416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.848465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.848478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.848484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.848490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.848504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.858547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.858603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.858616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.858622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.858628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.858643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.868587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.868661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.868674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.868680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.868686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.868700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.878593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.878641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.878654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.878660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.878666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.878680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.888654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.888711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.888723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.888730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.888736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.888750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.898661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.898718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.898731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.898737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.898743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.898757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.908674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.908726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.908739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.908746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.908752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.908766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.918692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.050 [2024-12-06 11:29:20.918745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.050 [2024-12-06 11:29:20.918760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.050 [2024-12-06 11:29:20.918767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.050 [2024-12-06 11:29:20.918773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.050 [2024-12-06 11:29:20.918787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.050 qpair failed and we were unable to recover it.
00:27:48.050 [2024-12-06 11:29:20.928721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.928773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.928789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.928796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.928802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.928818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.051 [2024-12-06 11:29:20.938760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.938815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.938829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.938836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.938842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.938856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.051 [2024-12-06 11:29:20.948781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.948832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.948845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.948851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.948857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.948872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.051 [2024-12-06 11:29:20.958811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.958880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.958894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.958900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.958910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.958924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.051 [2024-12-06 11:29:20.968862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.968916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.968929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.968936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.968942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.968956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.051 [2024-12-06 11:29:20.978899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.051 [2024-12-06 11:29:20.978967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.051 [2024-12-06 11:29:20.978979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.051 [2024-12-06 11:29:20.978986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.051 [2024-12-06 11:29:20.978992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.051 [2024-12-06 11:29:20.979007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.051 qpair failed and we were unable to recover it.
00:27:48.311 [2024-12-06 11:29:20.988897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.311 [2024-12-06 11:29:20.988952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.311 [2024-12-06 11:29:20.988965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.311 [2024-12-06 11:29:20.988972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.311 [2024-12-06 11:29:20.988979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.311 [2024-12-06 11:29:20.988993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-12-06 11:29:20.998922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.311 [2024-12-06 11:29:20.998976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.311 [2024-12-06 11:29:20.998989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.311 [2024-12-06 11:29:20.998996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.311 [2024-12-06 11:29:20.999002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.311 [2024-12-06 11:29:20.999017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-12-06 11:29:21.008870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.311 [2024-12-06 11:29:21.008928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.311 [2024-12-06 11:29:21.008942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.311 [2024-12-06 11:29:21.008950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.311 [2024-12-06 11:29:21.008956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.312 [2024-12-06 11:29:21.008970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-12-06 11:29:21.018977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.019030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.019044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.019050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.019056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.019076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.029008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.029066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.029080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.029086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.029092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.029106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.039017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.039091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.039106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.039113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.039118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.039133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.049056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.049106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.049122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.049128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.049134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.049148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.059113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.059181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.059194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.059201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.059208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.059222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.069125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.069179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.069192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.069199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.069205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.069220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.079064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.079122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.079135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.079142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.079147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.079163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.089087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.089149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.089162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.089169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.089178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.089192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.099206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.099259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.099272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.099278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.099284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.099298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.109254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.109334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.109347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.109353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.109359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.109373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.312 [2024-12-06 11:29:21.119268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.312 [2024-12-06 11:29:21.119355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.312 [2024-12-06 11:29:21.119367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.312 [2024-12-06 11:29:21.119374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.312 [2024-12-06 11:29:21.119380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.312 [2024-12-06 11:29:21.119393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.312 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.129204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.129259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.129272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.129278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.129284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.129298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.139266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.139358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.139372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.139378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.139384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.139399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.149314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.149367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.149380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.149387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.149392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.149407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.159334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.159419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.159432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.159439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.159444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.159458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.169350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.169397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.169409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.169416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.169422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.169436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.179381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.179441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.179457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.179463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.179469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.179483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.189435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.189489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.189502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.189509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.189514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.189528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.199499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.199550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.199563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.199570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.199575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.199589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.209427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.209477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.209490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.209497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.209503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.209517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.219583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.219664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.219677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.219687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.219693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.219707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.229542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.229591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.313 [2024-12-06 11:29:21.229604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.313 [2024-12-06 11:29:21.229610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.313 [2024-12-06 11:29:21.229616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.313 [2024-12-06 11:29:21.229630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-12-06 11:29:21.239522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.313 [2024-12-06 11:29:21.239604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.314 [2024-12-06 11:29:21.239619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.314 [2024-12-06 11:29:21.239626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.314 [2024-12-06 11:29:21.239632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.314 [2024-12-06 11:29:21.239646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.314 qpair failed and we were unable to recover it. 
00:27:48.586 [2024-12-06 11:29:21.249603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.586 [2024-12-06 11:29:21.249654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.586 [2024-12-06 11:29:21.249667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.586 [2024-12-06 11:29:21.249673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.586 [2024-12-06 11:29:21.249679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.586 [2024-12-06 11:29:21.249693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.586 qpair failed and we were unable to recover it. 
00:27:48.586 [2024-12-06 11:29:21.259642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.587 [2024-12-06 11:29:21.259712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.587 [2024-12-06 11:29:21.259726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.587 [2024-12-06 11:29:21.259732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.587 [2024-12-06 11:29:21.259739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.587 [2024-12-06 11:29:21.259756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.587 qpair failed and we were unable to recover it. 
00:27:48.587 [2024-12-06 11:29:21.269648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.587 [2024-12-06 11:29:21.269724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.587 [2024-12-06 11:29:21.269738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.588 [2024-12-06 11:29:21.269744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.588 [2024-12-06 11:29:21.269750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.588 [2024-12-06 11:29:21.269764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.588 qpair failed and we were unable to recover it. 
00:27:48.588 [2024-12-06 11:29:21.279682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.588 [2024-12-06 11:29:21.279735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.588 [2024-12-06 11:29:21.279748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.588 [2024-12-06 11:29:21.279754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.588 [2024-12-06 11:29:21.279760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.588 [2024-12-06 11:29:21.279774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.588 qpair failed and we were unable to recover it. 
00:27:48.588 [2024-12-06 11:29:21.289696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.588 [2024-12-06 11:29:21.289750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.589 [2024-12-06 11:29:21.289763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.589 [2024-12-06 11:29:21.289769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.589 [2024-12-06 11:29:21.289775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.589 [2024-12-06 11:29:21.289790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.589 qpair failed and we were unable to recover it.
00:27:48.589 [2024-12-06 11:29:21.299752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.589 [2024-12-06 11:29:21.299812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.589 [2024-12-06 11:29:21.299824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.589 [2024-12-06 11:29:21.299831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.589 [2024-12-06 11:29:21.299837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.589 [2024-12-06 11:29:21.299852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.589 qpair failed and we were unable to recover it.
00:27:48.593 [2024-12-06 11:29:21.309801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.593 [2024-12-06 11:29:21.309877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.594 [2024-12-06 11:29:21.309890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.594 [2024-12-06 11:29:21.309896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.594 [2024-12-06 11:29:21.309902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.594 [2024-12-06 11:29:21.309916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.594 qpair failed and we were unable to recover it.
00:27:48.594 [2024-12-06 11:29:21.319795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.594 [2024-12-06 11:29:21.319844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.594 [2024-12-06 11:29:21.319856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.594 [2024-12-06 11:29:21.319863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.594 [2024-12-06 11:29:21.319869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.595 [2024-12-06 11:29:21.319883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.595 qpair failed and we were unable to recover it.
00:27:48.595 [2024-12-06 11:29:21.329824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.595 [2024-12-06 11:29:21.329873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.595 [2024-12-06 11:29:21.329886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.595 [2024-12-06 11:29:21.329893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.595 [2024-12-06 11:29:21.329898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.595 [2024-12-06 11:29:21.329912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.595 qpair failed and we were unable to recover it.
00:27:48.595 [2024-12-06 11:29:21.339901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.595 [2024-12-06 11:29:21.339957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.596 [2024-12-06 11:29:21.339971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.596 [2024-12-06 11:29:21.339977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.596 [2024-12-06 11:29:21.339983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.596 [2024-12-06 11:29:21.339998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.596 qpair failed and we were unable to recover it.
00:27:48.596 [2024-12-06 11:29:21.349880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.596 [2024-12-06 11:29:21.349931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.596 [2024-12-06 11:29:21.349944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.596 [2024-12-06 11:29:21.349953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.596 [2024-12-06 11:29:21.349959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.596 [2024-12-06 11:29:21.349973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.596 qpair failed and we were unable to recover it.
00:27:48.596 [2024-12-06 11:29:21.359929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.596 [2024-12-06 11:29:21.359995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.596 [2024-12-06 11:29:21.360008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.596 [2024-12-06 11:29:21.360015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.596 [2024-12-06 11:29:21.360021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.596 [2024-12-06 11:29:21.360035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.596 qpair failed and we were unable to recover it.
00:27:48.597 [2024-12-06 11:29:21.369934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.597 [2024-12-06 11:29:21.369984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.597 [2024-12-06 11:29:21.369997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.597 [2024-12-06 11:29:21.370003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.597 [2024-12-06 11:29:21.370009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.597 [2024-12-06 11:29:21.370024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.597 qpair failed and we were unable to recover it.
00:27:48.597 [2024-12-06 11:29:21.379965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.597 [2024-12-06 11:29:21.380015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.597 [2024-12-06 11:29:21.380028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.597 [2024-12-06 11:29:21.380035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.601 [2024-12-06 11:29:21.380040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.601 [2024-12-06 11:29:21.380054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.602 qpair failed and we were unable to recover it.
00:27:48.602 [2024-12-06 11:29:21.390016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.602 [2024-12-06 11:29:21.390083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.602 [2024-12-06 11:29:21.390096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.602 [2024-12-06 11:29:21.390103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.602 [2024-12-06 11:29:21.390109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.602 [2024-12-06 11:29:21.390126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.602 qpair failed and we were unable to recover it.
00:27:48.602 [2024-12-06 11:29:21.399947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.603 [2024-12-06 11:29:21.400002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.603 [2024-12-06 11:29:21.400014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.603 [2024-12-06 11:29:21.400021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.603 [2024-12-06 11:29:21.400026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.603 [2024-12-06 11:29:21.400041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.603 qpair failed and we were unable to recover it.
00:27:48.603 [2024-12-06 11:29:21.410036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.603 [2024-12-06 11:29:21.410109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.604 [2024-12-06 11:29:21.410122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.604 [2024-12-06 11:29:21.410129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.604 [2024-12-06 11:29:21.410134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.604 [2024-12-06 11:29:21.410148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.604 qpair failed and we were unable to recover it.
00:27:48.604 [2024-12-06 11:29:21.420078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.605 [2024-12-06 11:29:21.420131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.605 [2024-12-06 11:29:21.420144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.605 [2024-12-06 11:29:21.420150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.605 [2024-12-06 11:29:21.420156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.605 [2024-12-06 11:29:21.420170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.605 qpair failed and we were unable to recover it.
00:27:48.605 [2024-12-06 11:29:21.430114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.605 [2024-12-06 11:29:21.430169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.605 [2024-12-06 11:29:21.430181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.609 [2024-12-06 11:29:21.430188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.609 [2024-12-06 11:29:21.430194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.609 [2024-12-06 11:29:21.430207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.609 qpair failed and we were unable to recover it.
00:27:48.610 [2024-12-06 11:29:21.440158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.610 [2024-12-06 11:29:21.440207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.610 [2024-12-06 11:29:21.440221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.610 [2024-12-06 11:29:21.440228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.610 [2024-12-06 11:29:21.440234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.610 [2024-12-06 11:29:21.440249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.610 qpair failed and we were unable to recover it.
00:27:48.610 [2024-12-06 11:29:21.450129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.610 [2024-12-06 11:29:21.450186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.610 [2024-12-06 11:29:21.450200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.610 [2024-12-06 11:29:21.450206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.610 [2024-12-06 11:29:21.450212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.610 [2024-12-06 11:29:21.450227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.610 qpair failed and we were unable to recover it.
00:27:48.610 [2024-12-06 11:29:21.460191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.610 [2024-12-06 11:29:21.460243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.610 [2024-12-06 11:29:21.460257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.610 [2024-12-06 11:29:21.460264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.610 [2024-12-06 11:29:21.460269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.611 [2024-12-06 11:29:21.460284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.611 qpair failed and we were unable to recover it.
00:27:48.611 [2024-12-06 11:29:21.470238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.611 [2024-12-06 11:29:21.470290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.611 [2024-12-06 11:29:21.470303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.611 [2024-12-06 11:29:21.470310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.611 [2024-12-06 11:29:21.470316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.611 [2024-12-06 11:29:21.470331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.611 qpair failed and we were unable to recover it.
00:27:48.611 [2024-12-06 11:29:21.480240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.611 [2024-12-06 11:29:21.480290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.611 [2024-12-06 11:29:21.480306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.611 [2024-12-06 11:29:21.480312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.611 [2024-12-06 11:29:21.480318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.611 [2024-12-06 11:29:21.480332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.611 qpair failed and we were unable to recover it.
00:27:48.611 [2024-12-06 11:29:21.490271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.611 [2024-12-06 11:29:21.490325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.612 [2024-12-06 11:29:21.490338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.612 [2024-12-06 11:29:21.490344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.612 [2024-12-06 11:29:21.490351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.612 [2024-12-06 11:29:21.490366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.612 qpair failed and we were unable to recover it.
00:27:48.612 [2024-12-06 11:29:21.500319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.612 [2024-12-06 11:29:21.500382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.612 [2024-12-06 11:29:21.500395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.612 [2024-12-06 11:29:21.500402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.612 [2024-12-06 11:29:21.500408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.612 [2024-12-06 11:29:21.500423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.612 qpair failed and we were unable to recover it.
00:27:48.612 [2024-12-06 11:29:21.510318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.612 [2024-12-06 11:29:21.510371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.612 [2024-12-06 11:29:21.510384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.612 [2024-12-06 11:29:21.510390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.613 [2024-12-06 11:29:21.510396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.613 [2024-12-06 11:29:21.510410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.613 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.520361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.520413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.520426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.520432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.520441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.520455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.530378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.530429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.530441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.530448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.530453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.530468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.540424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.540496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.540509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.540515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.540521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.540535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.550463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.550542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.550556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.550562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.550568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.550582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.560509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.560564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.560577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.560584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.560589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.560604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.570491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.570541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.570554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.570560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.570566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.570581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.580562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.580665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.879 [2024-12-06 11:29:21.580677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.879 [2024-12-06 11:29:21.580683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.879 [2024-12-06 11:29:21.580689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.879 [2024-12-06 11:29:21.580703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.879 qpair failed and we were unable to recover it.
00:27:48.879 [2024-12-06 11:29:21.590542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.879 [2024-12-06 11:29:21.590596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.880 [2024-12-06 11:29:21.590608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.880 [2024-12-06 11:29:21.590615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.880 [2024-12-06 11:29:21.590620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.880 [2024-12-06 11:29:21.590635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.880 qpair failed and we were unable to recover it.
00:27:48.880 [2024-12-06 11:29:21.600629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.880 [2024-12-06 11:29:21.600694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.880 [2024-12-06 11:29:21.600706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.880 [2024-12-06 11:29:21.600713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.880 [2024-12-06 11:29:21.600718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.880 [2024-12-06 11:29:21.600733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.880 qpair failed and we were unable to recover it.
00:27:48.880 [2024-12-06 11:29:21.610602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.880 [2024-12-06 11:29:21.610652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.880 [2024-12-06 11:29:21.610667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.880 [2024-12-06 11:29:21.610674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.880 [2024-12-06 11:29:21.610680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.880 [2024-12-06 11:29:21.610694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.880 qpair failed and we were unable to recover it.
00:27:48.880 [2024-12-06 11:29:21.620636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.880 [2024-12-06 11:29:21.620694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.880 [2024-12-06 11:29:21.620707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.880 [2024-12-06 11:29:21.620714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.880 [2024-12-06 11:29:21.620721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.880 [2024-12-06 11:29:21.620735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.880 qpair failed and we were unable to recover it.
00:27:48.880 [2024-12-06 11:29:21.630673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.880 [2024-12-06 11:29:21.630722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.880 [2024-12-06 11:29:21.630735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.880 [2024-12-06 11:29:21.630742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.880 [2024-12-06 11:29:21.630747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:48.880 [2024-12-06 11:29:21.630762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.880 qpair failed and we were unable to recover it.
00:27:48.880 [2024-12-06 11:29:21.640688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.640739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.640752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.640759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.640765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.640780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.650724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.650774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.650788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.650794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.650802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.650817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.660755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.660806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.660819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.660825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.660831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.660844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.670786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.670836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.670848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.670855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.670860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.670874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.680793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.680847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.680861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.680867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.680873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.680888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.690838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.690889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.690902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.690909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.690915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.690929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.700868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.700944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.700957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.700964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.700969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.700983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.710901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.710956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.710969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.710976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.710981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.710996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.720920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.720995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.721008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.721014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.721019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.721034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.730945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.730996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.731009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.731016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.731022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.731036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.740979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.741035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.741051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.741064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.741070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.741085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.751009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.751070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.751083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.751090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.751096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.751110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.761074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.761128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.761140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.761147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.761153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.761167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.880 [2024-12-06 11:29:21.771050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.880 [2024-12-06 11:29:21.771107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.880 [2024-12-06 11:29:21.771119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.880 [2024-12-06 11:29:21.771126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.880 [2024-12-06 11:29:21.771132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.880 [2024-12-06 11:29:21.771146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.880 qpair failed and we were unable to recover it. 
00:27:48.881 [2024-12-06 11:29:21.781080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.881 [2024-12-06 11:29:21.781133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.881 [2024-12-06 11:29:21.781146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.881 [2024-12-06 11:29:21.781156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.881 [2024-12-06 11:29:21.781162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.881 [2024-12-06 11:29:21.781177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.881 qpair failed and we were unable to recover it. 
00:27:48.881 [2024-12-06 11:29:21.791125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.881 [2024-12-06 11:29:21.791177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.881 [2024-12-06 11:29:21.791190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.881 [2024-12-06 11:29:21.791197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.881 [2024-12-06 11:29:21.791203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.881 [2024-12-06 11:29:21.791218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.881 qpair failed and we were unable to recover it. 
00:27:48.881 [2024-12-06 11:29:21.801190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.881 [2024-12-06 11:29:21.801239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.881 [2024-12-06 11:29:21.801252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.881 [2024-12-06 11:29:21.801258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.881 [2024-12-06 11:29:21.801264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.881 [2024-12-06 11:29:21.801278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.881 qpair failed and we were unable to recover it. 
00:27:48.881 [2024-12-06 11:29:21.811184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.881 [2024-12-06 11:29:21.811232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.881 [2024-12-06 11:29:21.811245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.881 [2024-12-06 11:29:21.811252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.881 [2024-12-06 11:29:21.811258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:48.881 [2024-12-06 11:29:21.811272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.881 qpair failed and we were unable to recover it. 
00:27:49.140 [2024-12-06 11:29:21.821215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.140 [2024-12-06 11:29:21.821276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.140 [2024-12-06 11:29:21.821288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.140 [2024-12-06 11:29:21.821295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.140 [2024-12-06 11:29:21.821300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.140 [2024-12-06 11:29:21.821317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.140 qpair failed and we were unable to recover it. 
00:27:49.140 [2024-12-06 11:29:21.831237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.140 [2024-12-06 11:29:21.831289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.831302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.831308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.831314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.831328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.841241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.841289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.841303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.841310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.841315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.841329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.851286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.851334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.851348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.851354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.851360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.851374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.861325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.861377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.861389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.861396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.861402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.861416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.871345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.871404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.871417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.871424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.871430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.871444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.881367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.881457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.881470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.881477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.881483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.881497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.891399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.891447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.891460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.891467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.891472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.891487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.901484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.901540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.901553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.901560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.901565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.901580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [2024-12-06 11:29:21.911472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.141 [2024-12-06 11:29:21.911521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.141 [2024-12-06 11:29:21.911534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.141 [2024-12-06 11:29:21.911543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.141 [2024-12-06 11:29:21.911549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.141 [2024-12-06 11:29:21.911564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.141 qpair failed and we were unable to recover it. 
00:27:49.141 [... the same seven-record CONNECT-failure cycle (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7f9d40000b90 -> CQ transport error -6 (No such device or address) on qpair id 4) repeats 34 more times at ~10 ms intervals, from 2024-12-06 11:29:21.921 through 11:29:22.252, each iteration ending: qpair failed and we were unable to recover it. ...]
00:27:49.405 [2024-12-06 11:29:22.262429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.262484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.262497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.262504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.262510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.262524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.272453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.272504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.272516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.272523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.272529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.272543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.282527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.282622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.282635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.282642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.282648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.282662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.292507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.292559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.292572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.292579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.292585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.292599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.302544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.302635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.302648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.302657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.302664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.302678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.312562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.312611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.312623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.312630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.312635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.312649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.322607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.322692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.322704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.322711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.322717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.322730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.405 [2024-12-06 11:29:22.332644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.405 [2024-12-06 11:29:22.332690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.405 [2024-12-06 11:29:22.332703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.405 [2024-12-06 11:29:22.332710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.405 [2024-12-06 11:29:22.332715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.405 [2024-12-06 11:29:22.332730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.405 qpair failed and we were unable to recover it. 
00:27:49.665 [2024-12-06 11:29:22.342664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.665 [2024-12-06 11:29:22.342720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.665 [2024-12-06 11:29:22.342734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.665 [2024-12-06 11:29:22.342743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.665 [2024-12-06 11:29:22.342749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.665 [2024-12-06 11:29:22.342764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.665 qpair failed and we were unable to recover it. 
00:27:49.665 [2024-12-06 11:29:22.352694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.665 [2024-12-06 11:29:22.352745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.352758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.352765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.352770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.352784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.362745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.362799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.362813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.362819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.362825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.362840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.372703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.372760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.372772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.372779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.372785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.372800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.382781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.382835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.382848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.382855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.382860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.382878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.392844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.392898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.392911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.392918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.392924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.392938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.402826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.402878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.402891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.402897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.402903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.402918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.412850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.412922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.412935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.412941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.412947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.412961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.422889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.422944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.422957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.422964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.422969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.422984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.432933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.432987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.433000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.433007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.433013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.433026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.442949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.442996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.443009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.443016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.443022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.443036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.452965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.453013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.453027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.453034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.453039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.453054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.463004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.463063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.463076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.463083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.666 [2024-12-06 11:29:22.463089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.666 [2024-12-06 11:29:22.463104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.666 qpair failed and we were unable to recover it. 
00:27:49.666 [2024-12-06 11:29:22.473019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.666 [2024-12-06 11:29:22.473077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.666 [2024-12-06 11:29:22.473090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.666 [2024-12-06 11:29:22.473100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.473106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.473121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.483102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.483199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.483212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.483219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.483226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.483240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.493070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.493120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.493133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.493140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.493146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.493160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.503106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.503161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.503174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.503180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.503186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.503201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.513133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.513194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.513207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.513214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.513220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.513236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.523084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.523136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.523149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.523155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.523161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.523175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.533175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.533229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.533241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.533248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.533253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.533268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.543212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.543263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.543277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.543283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.543289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.543303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.553246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.553300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.553313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.553319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.553325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.553338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.563302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.563353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.563367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.563374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.563379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.563393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.573216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.573310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.573323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.573330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.573336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.573350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.583293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.583361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.583373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.583380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.583386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.583400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.667 [2024-12-06 11:29:22.593351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.667 [2024-12-06 11:29:22.593409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.667 [2024-12-06 11:29:22.593422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.667 [2024-12-06 11:29:22.593428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.667 [2024-12-06 11:29:22.593434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.667 [2024-12-06 11:29:22.593449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.667 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.603304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.603356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.603372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.603379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.603385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.603399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.613347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.613394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.613407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.613414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.613420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.613435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.623428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.623480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.623493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.623500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.623506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.623520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.633387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.633442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.633455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.633462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.633468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.633481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.643503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.643554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.643567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.643574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.643583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.643597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.653516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.653566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.653579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.653586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.653592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.653606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.663575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.663684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.663696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.663703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.663709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.663724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.673579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.673629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.673642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.673649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.673654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.673669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.683549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.683601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.683614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.683620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.683626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.683640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.928 [2024-12-06 11:29:22.693560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.928 [2024-12-06 11:29:22.693620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.928 [2024-12-06 11:29:22.693634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.928 [2024-12-06 11:29:22.693641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.928 [2024-12-06 11:29:22.693646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.928 [2024-12-06 11:29:22.693661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.928 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.703607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.703702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.703714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.703721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.703726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.703740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.713699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.713753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.713766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.713772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.713778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.713792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.723724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.723771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.723784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.723790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.723796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.723810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.733683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.733734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.733751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.733757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.733763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.733778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.743789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.743846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.743859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.743866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.743872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.743886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.753799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.753852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.753865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.753871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.753877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.753892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.763780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.763827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.763840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.763847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.763852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.763866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.773880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.773928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.773942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.773948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.773957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.773971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.783918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.783984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.783998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.784006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.784012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.784026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.793937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.794016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.794029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.794036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.794042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.794056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.803957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.804008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.804021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.929 [2024-12-06 11:29:22.804028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.929 [2024-12-06 11:29:22.804033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.929 [2024-12-06 11:29:22.804048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.929 qpair failed and we were unable to recover it. 
00:27:49.929 [2024-12-06 11:29:22.813985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.929 [2024-12-06 11:29:22.814037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.929 [2024-12-06 11:29:22.814050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.930 [2024-12-06 11:29:22.814057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.930 [2024-12-06 11:29:22.814068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.930 [2024-12-06 11:29:22.814083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.930 qpair failed and we were unable to recover it. 
00:27:49.930 [2024-12-06 11:29:22.823952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.930 [2024-12-06 11:29:22.824040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.930 [2024-12-06 11:29:22.824053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.930 [2024-12-06 11:29:22.824065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.930 [2024-12-06 11:29:22.824072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.930 [2024-12-06 11:29:22.824087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.930 qpair failed and we were unable to recover it. 
00:27:49.930 [2024-12-06 11:29:22.834048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.930 [2024-12-06 11:29:22.834124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.930 [2024-12-06 11:29:22.834137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.930 [2024-12-06 11:29:22.834143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.930 [2024-12-06 11:29:22.834149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.930 [2024-12-06 11:29:22.834164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.930 qpair failed and we were unable to recover it. 
00:27:49.930 [2024-12-06 11:29:22.844074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.930 [2024-12-06 11:29:22.844126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.930 [2024-12-06 11:29:22.844140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.930 [2024-12-06 11:29:22.844146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.930 [2024-12-06 11:29:22.844152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.930 [2024-12-06 11:29:22.844167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.930 qpair failed and we were unable to recover it. 
00:27:49.930 [2024-12-06 11:29:22.854033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.930 [2024-12-06 11:29:22.854120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.930 [2024-12-06 11:29:22.854134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.930 [2024-12-06 11:29:22.854140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.930 [2024-12-06 11:29:22.854146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:49.930 [2024-12-06 11:29:22.854160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.930 qpair failed and we were unable to recover it. 
00:27:50.190 [2024-12-06 11:29:22.864139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.190 [2024-12-06 11:29:22.864204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.190 [2024-12-06 11:29:22.864217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.190 [2024-12-06 11:29:22.864223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.190 [2024-12-06 11:29:22.864229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.190 [2024-12-06 11:29:22.864243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.190 qpair failed and we were unable to recover it. 
00:27:50.190 [2024-12-06 11:29:22.874150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.190 [2024-12-06 11:29:22.874204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.190 [2024-12-06 11:29:22.874217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.190 [2024-12-06 11:29:22.874224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.190 [2024-12-06 11:29:22.874229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.190 [2024-12-06 11:29:22.874244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.190 qpair failed and we were unable to recover it. 
00:27:50.190 [2024-12-06 11:29:22.884110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.884165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.884178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.884184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.884191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.884206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.894204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.894254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.894266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.894273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.894279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.894293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.904290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.904390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.904403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.904412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.904418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.904432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.914227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.914284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.914296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.914303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.914308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.914322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.924310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.924360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.924373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.924379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.924385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.924400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.934262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.934324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.934337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.934344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.934350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.934364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.944371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.944424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.944437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.944444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.944449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.944467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.954394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.954445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.954458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.954465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.954470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.954485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.964415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.964493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.964507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.964513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.964519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.964533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.974450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.974498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.974510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.974517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.974523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.974537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.984473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.984526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.984539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.984545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.191 [2024-12-06 11:29:22.984551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.191 [2024-12-06 11:29:22.984565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.191 qpair failed and we were unable to recover it. 
00:27:50.191 [2024-12-06 11:29:22.994485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.191 [2024-12-06 11:29:22.994540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.191 [2024-12-06 11:29:22.994552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.191 [2024-12-06 11:29:22.994559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:22.994565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:22.994578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.004592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.004665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.004678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.004684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.004690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.004704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.014563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.014621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.014634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.014640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.014646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.014660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.024597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.024651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.024664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.024671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.024677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.024691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.034623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.034679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.034695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.034702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.034708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.034722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.044651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.044703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.044717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.044724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.044730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.044744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.054707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.054783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.054796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.054803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.054808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.054822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.064714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.064782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.064796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.064803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.064808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.064822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.074737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.074788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.074800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.074807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.074813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.074832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.084762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.084813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.084826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.084832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.084838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.084852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.094796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.094846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.094859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.094866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.094871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.094886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.104832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.192 [2024-12-06 11:29:23.104884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.192 [2024-12-06 11:29:23.104897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.192 [2024-12-06 11:29:23.104904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.192 [2024-12-06 11:29:23.104909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.192 [2024-12-06 11:29:23.104924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.192 qpair failed and we were unable to recover it. 
00:27:50.192 [2024-12-06 11:29:23.114851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.193 [2024-12-06 11:29:23.114904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.193 [2024-12-06 11:29:23.114917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.193 [2024-12-06 11:29:23.114923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.193 [2024-12-06 11:29:23.114929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.193 [2024-12-06 11:29:23.114943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.193 qpair failed and we were unable to recover it. 
00:27:50.193 [2024-12-06 11:29:23.124903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.193 [2024-12-06 11:29:23.124955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.193 [2024-12-06 11:29:23.124967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.193 [2024-12-06 11:29:23.124974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.193 [2024-12-06 11:29:23.124980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.193 [2024-12-06 11:29:23.124994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.193 qpair failed and we were unable to recover it. 
00:27:50.454 [2024-12-06 11:29:23.134941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.454 [2024-12-06 11:29:23.135022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.454 [2024-12-06 11:29:23.135035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.454 [2024-12-06 11:29:23.135042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.454 [2024-12-06 11:29:23.135048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.454 [2024-12-06 11:29:23.135066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.454 qpair failed and we were unable to recover it. 
00:27:50.454 [2024-12-06 11:29:23.144988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.454 [2024-12-06 11:29:23.145062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.454 [2024-12-06 11:29:23.145077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.454 [2024-12-06 11:29:23.145084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.454 [2024-12-06 11:29:23.145090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.454 [2024-12-06 11:29:23.145106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.454 qpair failed and we were unable to recover it. 
00:27:50.454 [2024-12-06 11:29:23.154884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.154941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.154953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.154960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.454 [2024-12-06 11:29:23.154966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.454 [2024-12-06 11:29:23.154981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.454 qpair failed and we were unable to recover it.
00:27:50.454 [2024-12-06 11:29:23.164991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.165044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.165064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.165071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.454 [2024-12-06 11:29:23.165077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.454 [2024-12-06 11:29:23.165092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.454 qpair failed and we were unable to recover it.
00:27:50.454 [2024-12-06 11:29:23.175075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.175127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.175139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.175146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.454 [2024-12-06 11:29:23.175151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.454 [2024-12-06 11:29:23.175166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.454 qpair failed and we were unable to recover it.
00:27:50.454 [2024-12-06 11:29:23.185055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.185111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.185125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.185131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.454 [2024-12-06 11:29:23.185137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.454 [2024-12-06 11:29:23.185151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.454 qpair failed and we were unable to recover it.
00:27:50.454 [2024-12-06 11:29:23.195084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.195161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.195174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.195180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.454 [2024-12-06 11:29:23.195186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.454 [2024-12-06 11:29:23.195200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.454 qpair failed and we were unable to recover it.
00:27:50.454 [2024-12-06 11:29:23.205105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.454 [2024-12-06 11:29:23.205200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.454 [2024-12-06 11:29:23.205213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.454 [2024-12-06 11:29:23.205220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.205229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.205244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.215131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.215221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.215234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.215240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.215246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.215260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.225097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.225148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.225161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.225168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.225174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.225188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.235218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.235275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.235288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.235294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.235300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.235315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.245240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.245292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.245306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.245312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.245319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.245334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.255238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.255289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.255301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.255308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.255314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.255328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.265276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.265328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.265342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.265348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.265354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.265368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.275292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.275341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.275353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.275359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.275365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.275380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.285379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.285433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.285446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.285453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.285459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.285473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.295283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.295365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.295382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.295388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.295394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.295408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.305383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.305436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.305449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.305455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.305461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.305476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.455 qpair failed and we were unable to recover it.
00:27:50.455 [2024-12-06 11:29:23.315336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.455 [2024-12-06 11:29:23.315393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.455 [2024-12-06 11:29:23.315406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.455 [2024-12-06 11:29:23.315413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.455 [2024-12-06 11:29:23.315419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.455 [2024-12-06 11:29:23.315433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.325422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.325476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.325489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.325495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.325501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.325515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.335391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.335444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.335457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.335466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.335472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.335486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.345427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.345479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.345493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.345500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.345506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.345520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.355511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.355565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.355578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.355584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.355590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.355604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.365535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.365588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.365601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.365607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.365612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.365627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.375563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.375659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.375672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.375678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.375684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.375698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.456 [2024-12-06 11:29:23.385602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.456 [2024-12-06 11:29:23.385659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.456 [2024-12-06 11:29:23.385671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.456 [2024-12-06 11:29:23.385677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.456 [2024-12-06 11:29:23.385683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.456 [2024-12-06 11:29:23.385697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.456 qpair failed and we were unable to recover it.
00:27:50.716 [2024-12-06 11:29:23.395634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.716 [2024-12-06 11:29:23.395689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.716 [2024-12-06 11:29:23.395702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.716 [2024-12-06 11:29:23.395708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.716 [2024-12-06 11:29:23.395714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.716 [2024-12-06 11:29:23.395728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.716 qpair failed and we were unable to recover it.
00:27:50.716 [2024-12-06 11:29:23.405722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.716 [2024-12-06 11:29:23.405819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.716 [2024-12-06 11:29:23.405832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.716 [2024-12-06 11:29:23.405838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.716 [2024-12-06 11:29:23.405843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.716 [2024-12-06 11:29:23.405857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.716 qpair failed and we were unable to recover it.
00:27:50.716 [2024-12-06 11:29:23.415662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.716 [2024-12-06 11:29:23.415754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.716 [2024-12-06 11:29:23.415767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.415774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.415779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.415793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.425743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.425805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.425820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.425827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.425834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.425850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.435785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.435863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.435876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.435883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.435889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.435904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.445767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.445828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.445842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.445849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.445855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.445870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.455787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.455836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.455849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.455856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.455861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.455876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.465817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.465870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.465884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.465893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.465901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.465916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.475849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.475899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.475912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.475919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.475924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.475940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.485864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.485919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.485933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.485939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.485945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.485960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.495904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.717 [2024-12-06 11:29:23.495959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.717 [2024-12-06 11:29:23.495972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.717 [2024-12-06 11:29:23.495978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.717 [2024-12-06 11:29:23.495985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90
00:27:50.717 [2024-12-06 11:29:23.495999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.717 qpair failed and we were unable to recover it.
00:27:50.717 [2024-12-06 11:29:23.505894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.717 [2024-12-06 11:29:23.505989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.717 [2024-12-06 11:29:23.506003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.717 [2024-12-06 11:29:23.506010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.717 [2024-12-06 11:29:23.506015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.717 [2024-12-06 11:29:23.506032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.717 qpair failed and we were unable to recover it. 
00:27:50.717 [2024-12-06 11:29:23.515964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.717 [2024-12-06 11:29:23.516015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.717 [2024-12-06 11:29:23.516028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.717 [2024-12-06 11:29:23.516034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.717 [2024-12-06 11:29:23.516040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.717 [2024-12-06 11:29:23.516055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.717 qpair failed and we were unable to recover it. 
00:27:50.717 [2024-12-06 11:29:23.525908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.717 [2024-12-06 11:29:23.525961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.717 [2024-12-06 11:29:23.525973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.717 [2024-12-06 11:29:23.525980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.717 [2024-12-06 11:29:23.525986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.717 [2024-12-06 11:29:23.526000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.717 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.536024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.536078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.536092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.536098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.536105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.536120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.546031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.546088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.546102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.546108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.546114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.546128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.556080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.556167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.556180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.556187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.556193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.556206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.566102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.566150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.566163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.566170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.566175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.566190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.576124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.576174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.576187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.576194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.576200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.576214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.586160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.586213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.586226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.586233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.586239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.586253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.596188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.596243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.596259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.596265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.596271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.596285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.606213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.606268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.606281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.606288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.606294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.606308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.616255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.616309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.616321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.616328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.616334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.616348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.626262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.626317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.626330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.626336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.626342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.626356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.636293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.636346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.636359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.636365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.718 [2024-12-06 11:29:23.636370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.718 [2024-12-06 11:29:23.636388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.718 qpair failed and we were unable to recover it. 
00:27:50.718 [2024-12-06 11:29:23.646343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.718 [2024-12-06 11:29:23.646405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.718 [2024-12-06 11:29:23.646419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.718 [2024-12-06 11:29:23.646425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.719 [2024-12-06 11:29:23.646430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.719 [2024-12-06 11:29:23.646445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.719 qpair failed and we were unable to recover it. 
00:27:50.979 [2024-12-06 11:29:23.656345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.979 [2024-12-06 11:29:23.656397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.979 [2024-12-06 11:29:23.656409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.979 [2024-12-06 11:29:23.656416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.979 [2024-12-06 11:29:23.656421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.979 [2024-12-06 11:29:23.656436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.979 qpair failed and we were unable to recover it. 
00:27:50.979 [2024-12-06 11:29:23.666380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.979 [2024-12-06 11:29:23.666434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.979 [2024-12-06 11:29:23.666446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.979 [2024-12-06 11:29:23.666453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.979 [2024-12-06 11:29:23.666458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.979 [2024-12-06 11:29:23.666473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.979 qpair failed and we were unable to recover it. 
00:27:50.979 [2024-12-06 11:29:23.676409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.979 [2024-12-06 11:29:23.676458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.979 [2024-12-06 11:29:23.676471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.979 [2024-12-06 11:29:23.676477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.979 [2024-12-06 11:29:23.676483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.979 [2024-12-06 11:29:23.676497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.979 qpair failed and we were unable to recover it. 
00:27:50.979 [2024-12-06 11:29:23.686440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.686527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.686539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.686546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.686552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.686566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.696460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.696518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.696530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.696537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.696544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.696557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.706493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.706548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.706561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.706567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.706573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.706587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.716504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.716555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.716568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.716574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.716580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.716594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.726540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.726589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.726604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.726611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.726617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.726631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.736564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.736614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.736627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.736634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.736639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.736654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.746630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.746685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.746698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.746705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.746711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.746726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.756634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.756682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.756695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.756702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.756708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.756723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.766655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.766710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.766722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.766729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.766737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.766752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.776655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.776713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.776726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.776732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.776738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.776753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.786719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.786769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.786782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.786788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.980 [2024-12-06 11:29:23.786794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.980 [2024-12-06 11:29:23.786808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.980 qpair failed and we were unable to recover it. 
00:27:50.980 [2024-12-06 11:29:23.796741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.980 [2024-12-06 11:29:23.796833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.980 [2024-12-06 11:29:23.796845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.980 [2024-12-06 11:29:23.796852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.796857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.796871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.806797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.806849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.806861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.806867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.806873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.806887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.816741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.816838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.816850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.816857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.816863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.816877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.826865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.826923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.826937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.826943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.826949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.826963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.836862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.836916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.836928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.836935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.836941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.836955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.846891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.846943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.846956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.846962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.846969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.846983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.856964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.857025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.857041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.857048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.857053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.857072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.866930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.866991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.867004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.867010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.867016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.867030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.876969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.877036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.877048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.877055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.877065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.877079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.886992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.887048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.887065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.887072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.887077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.887091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.897025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.897075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.897088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.897097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.897104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.897119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:50.981 [2024-12-06 11:29:23.907054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.981 [2024-12-06 11:29:23.907131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.981 [2024-12-06 11:29:23.907144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.981 [2024-12-06 11:29:23.907151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.981 [2024-12-06 11:29:23.907157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:50.981 [2024-12-06 11:29:23.907171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.981 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.917087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.917136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.917148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.917155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.917160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:51.241 [2024-12-06 11:29:23.917174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.927114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.927164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.927177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.927184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.927189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:51.241 [2024-12-06 11:29:23.927204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.937154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.937212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.937225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.937231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.937238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:51.241 [2024-12-06 11:29:23.937252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.947137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.947190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.947203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.947210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.947216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:51.241 [2024-12-06 11:29:23.947231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.957225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.957317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.957330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.957336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.957342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d40000b90 00:27:51.241 [2024-12-06 11:29:23.957356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.967255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.967354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.967408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.967434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.967454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d44000b90 00:27:51.241 [2024-12-06 11:29:23.967505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.977276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.241 [2024-12-06 11:29:23.977361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.241 [2024-12-06 11:29:23.977391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.241 [2024-12-06 11:29:23.977406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.241 [2024-12-06 11:29:23.977419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d44000b90 00:27:51.241 [2024-12-06 11:29:23.977449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:51.241 qpair failed and we were unable to recover it. 
00:27:51.241 [2024-12-06 11:29:23.987320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.242 [2024-12-06 11:29:23.987428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.242 [2024-12-06 11:29:23.987483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.242 [2024-12-06 11:29:23.987508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.242 [2024-12-06 11:29:23.987528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d4c000b90 00:27:51.242 [2024-12-06 11:29:23.987577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.242 qpair failed and we were unable to recover it. 
00:27:51.242 [2024-12-06 11:29:23.997308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.242 [2024-12-06 11:29:23.997395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.242 [2024-12-06 11:29:23.997423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.242 [2024-12-06 11:29:23.997437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.242 [2024-12-06 11:29:23.997450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d4c000b90 00:27:51.242 [2024-12-06 11:29:23.997481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.242 qpair failed and we were unable to recover it. 
00:27:51.242 [2024-12-06 11:29:24.007363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.242 [2024-12-06 11:29:24.007458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.242 [2024-12-06 11:29:24.007514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.242 [2024-12-06 11:29:24.007539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.242 [2024-12-06 11:29:24.007559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc20590 00:27:51.242 [2024-12-06 11:29:24.007611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.242 qpair failed and we were unable to recover it. 
00:27:51.242 [2024-12-06 11:29:24.017323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.242 [2024-12-06 11:29:24.017393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.242 [2024-12-06 11:29:24.017423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.242 [2024-12-06 11:29:24.017439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.242 [2024-12-06 11:29:24.017452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc20590 00:27:51.242 [2024-12-06 11:29:24.017483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.242 qpair failed and we were unable to recover it. 00:27:51.242 Controller properly reset. 00:27:51.242 Initializing NVMe Controllers 00:27:51.242 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:51.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:51.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:51.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:51.242 Initialization complete. Launching workers. 
00:27:51.242 Starting thread on core 1 00:27:51.242 Starting thread on core 2 00:27:51.242 Starting thread on core 3 00:27:51.242 Starting thread on core 0 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:51.242 00:27:51.242 real 0m11.455s 00:27:51.242 user 0m21.691s 00:27:51.242 sys 0m4.699s 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.242 ************************************ 00:27:51.242 END TEST nvmf_target_disconnect_tc2 00:27:51.242 ************************************ 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.242 rmmod nvme_tcp 00:27:51.242 rmmod nvme_fabrics 00:27:51.242 rmmod nvme_keyring 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1889048 ']' 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1889048 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1889048 ']' 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1889048 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.242 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889048 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1889048' 00:27:51.502 killing process with pid 1889048 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1889048 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1889048 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.502 11:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.036 11:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.036 00:27:54.036 real 0m20.213s 00:27:54.036 user 0m49.669s 00:27:54.036 sys 0m9.578s 00:27:54.036 11:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.036 11:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:54.036 ************************************ 00:27:54.036 END TEST nvmf_target_disconnect 00:27:54.036 ************************************ 00:27:54.036 11:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:54.036 00:27:54.036 real 5m54.833s 00:27:54.036 user 10m42.533s 00:27:54.036 sys 1m58.914s 00:27:54.037 11:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.037 11:29:26 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.037 ************************************ 00:27:54.037 END TEST nvmf_host 00:27:54.037 ************************************ 00:27:54.037 11:29:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:54.037 11:29:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:54.037 11:29:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:54.037 11:29:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:54.037 11:29:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.037 11:29:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.037 ************************************ 00:27:54.037 START TEST nvmf_target_core_interrupt_mode 00:27:54.037 ************************************ 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:54.037 * Looking for test storage... 
00:27:54.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:54.037 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.037 --rc 
genhtml_branch_coverage=1 00:27:54.037 --rc genhtml_function_coverage=1 00:27:54.037 --rc genhtml_legend=1 00:27:54.037 --rc geninfo_all_blocks=1 00:27:54.037 --rc geninfo_unexecuted_blocks=1 00:27:54.037 00:27:54.037 ' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.037 --rc genhtml_branch_coverage=1 00:27:54.037 --rc genhtml_function_coverage=1 00:27:54.037 --rc genhtml_legend=1 00:27:54.037 --rc geninfo_all_blocks=1 00:27:54.037 --rc geninfo_unexecuted_blocks=1 00:27:54.037 00:27:54.037 ' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.037 --rc genhtml_branch_coverage=1 00:27:54.037 --rc genhtml_function_coverage=1 00:27:54.037 --rc genhtml_legend=1 00:27:54.037 --rc geninfo_all_blocks=1 00:27:54.037 --rc geninfo_unexecuted_blocks=1 00:27:54.037 00:27:54.037 ' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.037 --rc genhtml_branch_coverage=1 00:27:54.037 --rc genhtml_function_coverage=1 00:27:54.037 --rc genhtml_legend=1 00:27:54.037 --rc geninfo_all_blocks=1 00:27:54.037 --rc geninfo_unexecuted_blocks=1 00:27:54.037 00:27:54.037 ' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.037 
11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.037 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.038 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:54.038 
11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:54.038 ************************************ 00:27:54.038 START TEST nvmf_abort 00:27:54.038 ************************************ 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:54.038 * Looking for test storage... 
00:27:54.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:54.038 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.038 --rc genhtml_branch_coverage=1 00:27:54.038 --rc genhtml_function_coverage=1 00:27:54.038 --rc genhtml_legend=1 00:27:54.038 --rc geninfo_all_blocks=1 00:27:54.038 --rc geninfo_unexecuted_blocks=1 00:27:54.038 00:27:54.038 ' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.038 --rc genhtml_branch_coverage=1 00:27:54.038 --rc genhtml_function_coverage=1 00:27:54.038 --rc genhtml_legend=1 00:27:54.038 --rc geninfo_all_blocks=1 00:27:54.038 --rc geninfo_unexecuted_blocks=1 00:27:54.038 00:27:54.038 ' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.038 --rc genhtml_branch_coverage=1 00:27:54.038 --rc genhtml_function_coverage=1 00:27:54.038 --rc genhtml_legend=1 00:27:54.038 --rc geninfo_all_blocks=1 00:27:54.038 --rc geninfo_unexecuted_blocks=1 00:27:54.038 00:27:54.038 ' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.038 --rc genhtml_branch_coverage=1 00:27:54.038 --rc genhtml_function_coverage=1 00:27:54.038 --rc genhtml_legend=1 00:27:54.038 --rc geninfo_all_blocks=1 00:27:54.038 --rc geninfo_unexecuted_blocks=1 00:27:54.038 00:27:54.038 ' 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.038 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.298 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.298 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.299 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.299 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.299 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.872 11:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.872 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.872 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.872 
11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.872 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.872 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.872 11:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:28:00.872 00:28:00.872 --- 10.0.0.2 ping statistics --- 00:28:00.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.872 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:00.872 00:28:00.872 --- 10.0.0.1 ping statistics --- 00:28:00.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.872 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1893921 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1893921 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1893921 ']' 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.872 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:00.872 [2024-12-06 11:29:32.986671] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:00.872 [2024-12-06 11:29:32.987515] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:28:00.872 [2024-12-06 11:29:32.987545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.872 [2024-12-06 11:29:33.061392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.872 [2024-12-06 11:29:33.100660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.872 [2024-12-06 11:29:33.100694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.872 [2024-12-06 11:29:33.100700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.872 [2024-12-06 11:29:33.100706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.872 [2024-12-06 11:29:33.100710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.873 [2024-12-06 11:29:33.101967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.873 [2024-12-06 11:29:33.102092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.873 [2024-12-06 11:29:33.102105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.873 [2024-12-06 11:29:33.167845] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.873 [2024-12-06 11:29:33.168535] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:00.873 [2024-12-06 11:29:33.168678] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:00.873 [2024-12-06 11:29:33.168833] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:00.873 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.873 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:00.873 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.873 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:00.873 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 [2024-12-06 11:29:33.842876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:01.130 Malloc0 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 Delay0 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.130 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.131 [2024-12-06 11:29:33.926849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.131 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:01.388 [2024-12-06 11:29:34.096288] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:03.287 Initializing NVMe Controllers 00:28:03.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:03.287 controller IO queue size 128 less than required 00:28:03.287 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:03.287 Initialization complete. Launching workers. 
00:28:03.287 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41099 00:28:03.287 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41156, failed to submit 66 00:28:03.287 success 41099, unsuccessful 57, failed 0 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.287 rmmod nvme_tcp 00:28:03.287 rmmod nvme_fabrics 00:28:03.287 rmmod nvme_keyring 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.287 11:29:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1893921 ']' 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1893921 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1893921 ']' 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1893921 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.287 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1893921 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1893921' 00:28:03.545 killing process with pid 1893921 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1893921 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1893921 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.545 11:29:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.545 11:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.078 00:28:06.078 real 0m11.711s 00:28:06.078 user 0m10.507s 00:28:06.078 sys 0m5.649s 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.078 ************************************ 00:28:06.078 END TEST nvmf_abort 00:28:06.078 ************************************ 00:28:06.078 11:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:06.078 ************************************ 00:28:06.078 START TEST nvmf_ns_hotplug_stress 00:28:06.078 ************************************ 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:06.078 * Looking for test storage... 
00:28:06.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.078 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.079 11:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.079 11:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.079 --rc genhtml_branch_coverage=1 00:28:06.079 --rc genhtml_function_coverage=1 00:28:06.079 --rc genhtml_legend=1 00:28:06.079 --rc geninfo_all_blocks=1 00:28:06.079 --rc geninfo_unexecuted_blocks=1 00:28:06.079 00:28:06.079 ' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.079 --rc genhtml_branch_coverage=1 00:28:06.079 --rc genhtml_function_coverage=1 00:28:06.079 --rc genhtml_legend=1 00:28:06.079 --rc geninfo_all_blocks=1 00:28:06.079 --rc geninfo_unexecuted_blocks=1 00:28:06.079 00:28:06.079 ' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.079 --rc genhtml_branch_coverage=1 00:28:06.079 --rc genhtml_function_coverage=1 00:28:06.079 --rc genhtml_legend=1 00:28:06.079 --rc geninfo_all_blocks=1 00:28:06.079 --rc geninfo_unexecuted_blocks=1 00:28:06.079 00:28:06.079 ' 00:28:06.079 11:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.079 --rc genhtml_branch_coverage=1 00:28:06.079 --rc genhtml_function_coverage=1 00:28:06.079 --rc genhtml_legend=1 00:28:06.079 --rc geninfo_all_blocks=1 00:28:06.079 --rc geninfo_unexecuted_blocks=1 00:28:06.079 00:28:06.079 ' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.079 11:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.079 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.080 
11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.080 11:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.650 
11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.650 11:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.650 11:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.650 
11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.650 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.650 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.651 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.651 
11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:28:12.651 00:28:12.651 --- 10.0.0.2 ping statistics --- 00:28:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.651 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:12.651 00:28:12.651 --- 10.0.0.1 ping statistics --- 00:28:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.651 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.651 11:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1898183 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1898183 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1898183 ']' 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.651 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:12.651 [2024-12-06 11:29:44.841503] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:12.651 [2024-12-06 11:29:44.842367] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:28:12.651 [2024-12-06 11:29:44.842397] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.651 [2024-12-06 11:29:44.916826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.651 [2024-12-06 11:29:44.957010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.651 [2024-12-06 11:29:44.957041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.651 [2024-12-06 11:29:44.957047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.651 [2024-12-06 11:29:44.957053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.651 [2024-12-06 11:29:44.957061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
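The `nvmfappstart` step above starts the target inside the test network namespace. A dry-run sketch of that invocation, with the flags copied from the trace (the command is echoed rather than executed, since the real call needs root, the `cvl_0_0_ns_spdk` namespace, and the SPDK build tree):

```shell
#!/bin/sh
# Dry-run sketch of the nvmfappstart launch seen in this trace.
# Echoing stands in for the privileged `ip netns exec` call; the flag
# set (-i 0 -e 0xFFFF --interrupt-mode -m 0xE) is taken from the log.
launch_tgt() {
    netns=cvl_0_0_ns_spdk            # target-side namespace from the trace
    app=build/bin/nvmf_tgt           # path relative to the SPDK checkout
    echo ip netns exec "$netns" "$app" -i 0 -e 0xFFFF --interrupt-mode -m 0xE
    # The real helper then polls with waitforlisten until the app is
    # listening on /var/tmp/spdk.sock before any rpc.py call is made.
}
launch_tgt
```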
00:28:12.651 [2024-12-06 11:29:44.958284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.651 [2024-12-06 11:29:44.958314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.651 [2024-12-06 11:29:44.958315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.651 [2024-12-06 11:29:45.025255] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:12.651 [2024-12-06 11:29:45.025327] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:12.651 [2024-12-06 11:29:45.025827] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:12.651 [2024-12-06 11:29:45.026045] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.910 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
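The stretch of trace that follows provisions the target over `rpc.py`: transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev chain and the namespace attaches. A compact dry-run sketch of that sequence (arguments copied from the log; `echo` stands in for `scripts/rpc.py` so the ordering is visible without a live target):

```shell
#!/bin/sh
# Dry-run sketch of the rpc.py provisioning sequence in this trace.
# RPC would be "$SPDK_DIR/scripts/rpc.py" in the real run.
provision_target() {
    rpc="echo rpc.py"                                # dry-run stand-in
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport init
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # backing RAM bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc bdev_null_create NULL1 1000 512             # resizable null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}
provision_target
```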
00:28:12.911 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:12.911 [2024-12-06 11:29:45.839017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.169 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:13.169 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.428 [2024-12-06 11:29:46.183496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.428 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.687 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:13.687 Malloc0 00:28:13.687 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:13.946 Delay0 00:28:13.946 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.205 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:14.205 NULL1 00:28:14.205 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:14.464 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1898608 00:28:14.464 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:14.464 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:14.464 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.723 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.982 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:14.982 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:14.982 true 00:28:14.982 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:14.982 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.241 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.499 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:15.499 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:15.759 true 00:28:15.759 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:15.759 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.018 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.018 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:16.018 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:16.276 true 00:28:16.276 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:16.276 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.535 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.794 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:16.794 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:16.794 true 00:28:16.794 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:16.794 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.053 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.311 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:17.311 11:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:17.570 true 00:28:17.570 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:17.570 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.828 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.829 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:17.829 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:18.087 true 00:28:18.087 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:18.087 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.346 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.605 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:28:18.605 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:18.605 true 00:28:18.605 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:18.605 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.864 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.123 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:19.123 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:19.382 true 00:28:19.382 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:19.382 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.641 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.641 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:28:19.641 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:19.899 true 00:28:19.899 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:19.899 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.158 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.417 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:20.417 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:20.417 true 00:28:20.417 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:20.417 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.676 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.935 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:20.935 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:21.194 true 00:28:21.194 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:21.194 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.194 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.452 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:21.452 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:21.711 true 00:28:21.711 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:21.711 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.970 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.229 11:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:22.229 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:22.229 true 00:28:22.229 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:22.229 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.487 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.745 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:22.745 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:22.745 true 00:28:23.003 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:23.003 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.003 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:23.262 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:23.262 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:23.521 true 00:28:23.521 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:23.521 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.779 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.779 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:23.779 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:24.036 true 00:28:24.037 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:24.037 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.294 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.552 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:24.552 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:24.552 true 00:28:24.552 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:24.552 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.811 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.069 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:25.069 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:25.328 true 00:28:25.328 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:25.328 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.328 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.587 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:25.587 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:25.846 true 00:28:25.846 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:25.846 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.105 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.363 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:26.363 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:26.363 true 00:28:26.363 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:26.363 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.620 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.878 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:26.878 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:27.134 true 00:28:27.134 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:27.134 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.391 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.391 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:27.391 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:27.649 true 00:28:27.649 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:27.649 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.907 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.164 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:28.164 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:28.164 true 00:28:28.164 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:28.164 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.422 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.679 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:28.679 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:28.937 true 00:28:28.937 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:28.937 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.194 11:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.194 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:29.194 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:29.452 true 00:28:29.452 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:29.452 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.710 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.967 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:29.967 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:29.967 true 00:28:30.225 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:30.225 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:28:30.225 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.483 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:30.483 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:30.741 true 00:28:30.741 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:30.741 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.000 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.000 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:31.000 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:31.259 true 00:28:31.259 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:31.259 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:31.540 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.799 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:31.799 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:31.799 true 00:28:31.799 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:31.799 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.059 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.319 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:28:32.319 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:28:32.581 true 00:28:32.581 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:32.581 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.581 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.895 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:28:32.895 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:28:33.189 true 00:28:33.189 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:33.189 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.189 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.513 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:28:33.513 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:28:33.771 true 00:28:33.771 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:33.771 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.771 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.030 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:28:34.030 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:28:34.289 true 00:28:34.289 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:34.289 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.548 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.807 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:28:34.807 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:28:34.807 true 00:28:34.807 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:34.807 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.066 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.325 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:28:35.325 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:28:35.584 true 00:28:35.584 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:35.584 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.584 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:28:35.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:28:36.101 true 00:28:36.101 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:36.101 11:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.360 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.619 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:28:36.619 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:28:36.619 true 00:28:36.619 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:36.619 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.879 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.138 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:28:37.138 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:28:37.138 true 00:28:37.397 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 
00:28:37.397 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.397 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.656 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:28:37.656 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:28:37.915 true 00:28:37.915 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:37.915 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.173 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.173 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:28:38.173 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:28:38.432 true 00:28:38.432 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1898608 00:28:38.432 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.691 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.950 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:28:38.951 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:28:38.951 true 00:28:38.951 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:38.951 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.211 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.470 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:28:39.470 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:28:39.729 true 00:28:39.729 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:39.729 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.988 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.988 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:28:39.988 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:28:40.248 true 00:28:40.248 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:40.248 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.507 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.765 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:28:40.765 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:28:40.765 true 00:28:40.765 11:30:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:40.765 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.024 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.283 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:28:41.283 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:28:41.541 true 00:28:41.541 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:41.541 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.800 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.800 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:28:41.800 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:28:42.059 true 
00:28:42.059 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:42.059 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.318 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.578 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:28:42.578 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:28:42.578 true 00:28:42.578 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:42.835 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.835 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.093 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:28:43.093 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:28:43.351 true 00:28:43.351 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:43.351 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.609 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.609 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:28:43.609 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:28:43.866 true 00:28:43.866 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:43.866 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.124 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.382 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:28:44.382 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:28:44.382 true 00:28:44.382 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:44.382 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.640 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.899 Initializing NVMe Controllers 00:28:44.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.899 Controller IO queue size 128, less than required. 00:28:44.899 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:44.899 Initialization complete. Launching workers. 
00:28:44.899 ======================================================== 00:28:44.899 Latency(us) 00:28:44.899 Device Information : IOPS MiB/s Average min max 00:28:44.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30535.42 14.91 4191.74 1213.14 44299.76 00:28:44.899 ======================================================== 00:28:44.899 Total : 30535.42 14.91 4191.74 1213.14 44299.76 00:28:44.899 00:28:44.899 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:28:44.899 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:28:45.158 true 00:28:45.158 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1898608 00:28:45.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1898608) - No such process 00:28:45.158 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1898608 00:28:45.158 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.158 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:45.416 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:45.416 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:45.416 
11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:45.416 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:45.417 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:45.675 null0 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:45.675 null1 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:45.675 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:45.935 null2 00:28:45.935 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:45.935 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:45.935 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:46.193 null3 00:28:46.194 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.194 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.194 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:46.194 null4 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:46.453 null5 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.453 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:46.713 null6 00:28:46.713 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.713 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.713 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:46.973 null7 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.973 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:46.973 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1904888 1904889 1904890 1904891 1904893 1904895 1904898 1904900 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:46.974 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:46.974 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:46.974 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.233 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:47.233 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:47.492 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:47.492 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:47.751 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.011 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:48.270 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:48.528 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.528 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:48.529 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:48.788 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.789 11:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.789 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.048 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:49.308 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:49.309 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.309 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.309 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.309 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.568 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.568 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.828 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.828 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.828 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.828 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.088 11:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.088 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.347 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.347 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.347 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.605 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.605 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.864 rmmod nvme_tcp 00:28:50.864 rmmod nvme_fabrics 00:28:50.864 rmmod nvme_keyring 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:50.864 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1898183 ']' 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1898183 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1898183 ']' 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1898183 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898183 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898183' 00:28:50.864 killing process with pid 1898183 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1898183 00:28:50.864 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1898183 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.122 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.076 00:28:53.076 real 0m47.326s 00:28:53.076 user 2m58.896s 00:28:53.076 sys 0m21.576s 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:53.076 ************************************ 00:28:53.076 END TEST nvmf_ns_hotplug_stress 00:28:53.076 
************************************ 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:53.076 ************************************ 00:28:53.076 START TEST nvmf_delete_subsystem 00:28:53.076 ************************************ 00:28:53.076 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:53.335 * Looking for test storage... 
00:28:53.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.335 11:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.335 11:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.335 --rc genhtml_branch_coverage=1 00:28:53.335 --rc genhtml_function_coverage=1 00:28:53.335 --rc genhtml_legend=1 00:28:53.335 --rc geninfo_all_blocks=1 00:28:53.335 --rc geninfo_unexecuted_blocks=1 00:28:53.335 00:28:53.335 ' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.335 --rc genhtml_branch_coverage=1 00:28:53.335 --rc genhtml_function_coverage=1 00:28:53.335 --rc genhtml_legend=1 00:28:53.335 --rc geninfo_all_blocks=1 00:28:53.335 --rc geninfo_unexecuted_blocks=1 00:28:53.335 00:28:53.335 ' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.335 --rc genhtml_branch_coverage=1 00:28:53.335 --rc genhtml_function_coverage=1 00:28:53.335 --rc genhtml_legend=1 00:28:53.335 --rc geninfo_all_blocks=1 00:28:53.335 --rc geninfo_unexecuted_blocks=1 00:28:53.335 00:28:53.335 ' 00:28:53.335 11:30:26 
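The trace above shows `scripts/common.sh` running `lt 1.15 2` / `cmp_versions`: it splits each dotted version on `.` and `-` into an array and compares component-by-component. A minimal standalone sketch of that logic (the function name `ver_lt` is illustrative, not the harness's own; numeric components are assumed):

```shell
# Component-wise dotted-version "less than", modeled on the harness's
# cmp_versions helper. Splits on '.' and '-' like the trace's IFS=.- lines.
ver_lt() {
  local IFS=.-
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  # Compare up to the longer of the two component lists;
  # missing components are treated as 0 (so 1.15 vs 2 compares 1.15.0-style).
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  local i
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

In the log this comparison gates the lcov option set: `lt 1.15 2` succeeding is what selects the `--rc lcov_branch_coverage=1 ...` flags exported just after.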
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.335 --rc genhtml_branch_coverage=1 00:28:53.335 --rc genhtml_function_coverage=1 00:28:53.335 --rc genhtml_legend=1 00:28:53.335 --rc geninfo_all_blocks=1 00:28:53.335 --rc geninfo_unexecuted_blocks=1 00:28:53.335 00:28:53.335 ' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.335 11:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.335 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.335 
11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.336 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.336 11:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.915 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.915 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.915 11:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:59.915 Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.915 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.915 11:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.915 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.916 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:59.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:28:59.916 00:28:59.916 --- 10.0.0.2 ping statistics --- 00:28:59.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.916 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:28:59.916 00:28:59.916 --- 10.0.0.1 ping statistics --- 00:28:59.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.916 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
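The firewall rule inserted before the pings (`nvmf/common.sh@287 ipts ...`) illustrates the harness's tag-and-filter pattern: every rule the test adds carries an `SPDK_NVMF:` comment, so teardown (`iptr`, visible at the end of the previous test: `iptables-save | grep -v SPDK_NVMF | iptables-restore`) can strip only the test's rules and leave the rest of the ruleset untouched. A dry-run sketch of that pattern (the `run` stand-in just echoes, so this runs without root; the real helpers invoke iptables directly):

```shell
# Stand-in executor so the sketch is runnable without root privileges.
# The real nvmf/common.sh helpers call iptables / the save-restore
# pipeline directly instead of echoing.
run() { echo "$@"; }

# Insert a rule, tagging it with a comment that carries its own arguments,
# exactly as seen in the trace: ... -m comment --comment 'SPDK_NVMF:<args>'.
ipts() {
  run iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Teardown: re-load the saved ruleset minus every SPDK_NVMF-tagged rule.
iptr() {
  run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
iptr
```

The comment text doubles as a record of the original insert arguments, which is why the `grep -v` filter alone is enough to undo the setup regardless of how many rules a test added.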
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1909300 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1909300 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1909300 ']' 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.916 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:59.916 [2024-12-06 11:30:32.177609] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.916 [2024-12-06 11:30:32.178441] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:28:59.916 [2024-12-06 11:30:32.178472] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.916 [2024-12-06 11:30:32.256357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:59.916 [2024-12-06 11:30:32.297839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.916 [2024-12-06 11:30:32.297869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.916 [2024-12-06 11:30:32.297878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.916 [2024-12-06 11:30:32.297884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.916 [2024-12-06 11:30:32.297889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.916 [2024-12-06 11:30:32.299243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.916 [2024-12-06 11:30:32.299243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.916 [2024-12-06 11:30:32.366504] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:59.916 [2024-12-06 11:30:32.366697] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.916 [2024-12-06 11:30:32.368229] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:00.175 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.175 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:00.175 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.175 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.175 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 [2024-12-06 11:30:33.035861] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 [2024-12-06 11:30:33.064257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 NULL1 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 Delay0 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1909580 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:00.175 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:00.433 [2024-12-06 11:30:33.175021] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:02.341 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.341 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.341 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, 
sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 starting I/O failed: -6 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 [2024-12-06 11:30:35.387475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a0e0 is same with the state(6) to be set 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Write completed with error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.600 Write completed with 
error (sct=0, sc=8) 00:29:02.600 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 
00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with 
error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 starting I/O failed: -6 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 [2024-12-06 11:30:35.388272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cfc000c40 is same with the state(6) to be set 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error 
(sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Read completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:02.601 Write completed with error (sct=0, sc=8) 00:29:03.539 [2024-12-06 11:30:36.352024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b5f0 is same with the state(6) to be set 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 
00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 [2024-12-06 11:30:36.391465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a2c0 is same with the state(6) to be set 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 
[2024-12-06 11:30:36.391568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99f00 is same with the state(6) to be set 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 [2024-12-06 11:30:36.392139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cfc00d800 is same with the state(6) to be set 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 Write completed with error (sct=0, sc=8) 00:29:03.539 Read completed with error (sct=0, sc=8) 00:29:03.539 
[2024-12-06 11:30:36.392598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cfc00d020 is same with the state(6) to be set 00:29:03.539 Initializing NVMe Controllers 00:29:03.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.539 Controller IO queue size 128, less than required. 00:29:03.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:03.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:03.539 Initialization complete. Launching workers. 00:29:03.539 ======================================================== 00:29:03.539 Latency(us) 00:29:03.539 Device Information : IOPS MiB/s Average min max 00:29:03.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.47 0.08 905508.33 266.53 1010476.27 00:29:03.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.05 0.07 936930.68 220.28 1011052.56 00:29:03.539 ======================================================== 00:29:03.539 Total : 318.52 0.16 920606.75 220.28 1011052.56 00:29:03.539 00:29:03.539 [2024-12-06 11:30:36.393138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b5f0 (9): Bad file descriptor 00:29:03.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:03.539 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.539 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:03.539 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1909580 00:29:03.539 11:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1909580 00:29:04.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1909580) - No such process 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1909580 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1909580 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1909580 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.108 11:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.108 [2024-12-06 11:30:36.924256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.108 11:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1910117 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:04.108 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:04.108 [2024-12-06 11:30:37.007674] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:04.676 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:04.676 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:04.676 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:05.243 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.243 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:05.243 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:05.810 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.810 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:05.810 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.069 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.069 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:06.069 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.635 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.635 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:06.635 11:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.200 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.200 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:07.200 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.459 Initializing NVMe Controllers 00:29:07.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.459 Controller IO queue size 128, less than required. 00:29:07.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:07.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:07.459 Initialization complete. Launching workers. 
00:29:07.459 ======================================================== 00:29:07.459 Latency(us) 00:29:07.459 Device Information : IOPS MiB/s Average min max 00:29:07.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002037.28 1000176.34 1005348.53 00:29:07.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003830.20 1000253.82 1041315.78 00:29:07.459 ======================================================== 00:29:07.459 Total : 256.00 0.12 1002933.74 1000176.34 1041315.78 00:29:07.459 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1910117 00:29:07.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1910117) - No such process 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1910117 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.718 rmmod nvme_tcp 00:29:07.718 rmmod nvme_fabrics 00:29:07.718 rmmod nvme_keyring 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1909300 ']' 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1909300 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1909300 ']' 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1909300 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909300 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.718 11:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909300' 00:29:07.718 killing process with pid 1909300 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1909300 00:29:07.718 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1909300 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.978 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.978 11:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.518 00:29:10.518 real 0m16.859s 00:29:10.518 user 0m26.462s 00:29:10.518 sys 0m6.222s 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.518 ************************************ 00:29:10.518 END TEST nvmf_delete_subsystem 00:29:10.518 ************************************ 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:10.518 ************************************ 00:29:10.518 START TEST nvmf_host_management 00:29:10.518 ************************************ 00:29:10.518 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.518 * Looking for test storage... 
00:29:10.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.518 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.518 --rc genhtml_branch_coverage=1 00:29:10.518 --rc genhtml_function_coverage=1 00:29:10.518 --rc genhtml_legend=1 00:29:10.518 --rc geninfo_all_blocks=1 00:29:10.518 --rc geninfo_unexecuted_blocks=1 00:29:10.518 00:29:10.518 ' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.518 --rc genhtml_branch_coverage=1 00:29:10.518 --rc genhtml_function_coverage=1 00:29:10.518 --rc genhtml_legend=1 00:29:10.518 --rc geninfo_all_blocks=1 00:29:10.518 --rc geninfo_unexecuted_blocks=1 00:29:10.518 00:29:10.518 ' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.518 --rc genhtml_branch_coverage=1 00:29:10.518 --rc genhtml_function_coverage=1 00:29:10.518 --rc genhtml_legend=1 00:29:10.518 --rc geninfo_all_blocks=1 00:29:10.518 --rc geninfo_unexecuted_blocks=1 00:29:10.518 00:29:10.518 ' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:10.518 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.518 --rc genhtml_branch_coverage=1 00:29:10.518 --rc genhtml_function_coverage=1 00:29:10.518 --rc genhtml_legend=1 00:29:10.518 --rc geninfo_all_blocks=1 00:29:10.518 --rc geninfo_unexecuted_blocks=1 00:29:10.518 00:29:10.518 ' 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.518 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.519 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.519 
11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.519 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.089 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.089 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.089 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.089 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.090 
11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.090 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:17.090 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.090 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:17.090 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.090 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:17.090 Found net devices under 0000:af:00.0: cvl_0_0 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:17.090 Found net devices under 0000:af:00.1: cvl_0_1 00:29:17.090 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.090 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.091 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:29:17.091 00:29:17.091 --- 10.0.0.2 ping statistics --- 00:29:17.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.091 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:29:17.091 00:29:17.091 --- 10.0.0.1 ping statistics --- 00:29:17.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.091 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1914378 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1914378 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1914378 ']' 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.091 [2024-12-06 11:30:49.117366] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.091 [2024-12-06 11:30:49.118257] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:29:17.091 [2024-12-06 11:30:49.118290] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.091 [2024-12-06 11:30:49.192177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.091 [2024-12-06 11:30:49.231479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.091 [2024-12-06 11:30:49.231514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.091 [2024-12-06 11:30:49.231520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.091 [2024-12-06 11:30:49.231526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.091 [2024-12-06 11:30:49.231530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.091 [2024-12-06 11:30:49.232972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.091 [2024-12-06 11:30:49.233100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.091 [2024-12-06 11:30:49.233120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.091 [2024-12-06 11:30:49.233121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.091 [2024-12-06 11:30:49.299277] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:17.091 [2024-12-06 11:30:49.300271] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:17.091 [2024-12-06 11:30:49.300280] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.091 [2024-12-06 11:30:49.300286] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:17.091 [2024-12-06 11:30:49.300384] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.091 11:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.091 [2024-12-06 11:30:49.977935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.091 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.091 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.350 Malloc0 00:29:17.350 [2024-12-06 11:30:50.066099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1914674 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1914674 /var/tmp/bdevperf.sock 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1914674 ']' 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.350 { 00:29:17.350 "params": { 00:29:17.350 "name": "Nvme$subsystem", 00:29:17.350 "trtype": "$TEST_TRANSPORT", 00:29:17.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.350 "adrfam": "ipv4", 00:29:17.350 "trsvcid": "$NVMF_PORT", 00:29:17.350 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.350 "hdgst": ${hdgst:-false}, 00:29:17.350 "ddgst": ${ddgst:-false} 00:29:17.350 }, 00:29:17.350 "method": "bdev_nvme_attach_controller" 00:29:17.350 } 00:29:17.350 EOF 00:29:17.350 )") 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:17.350 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.350 "params": { 00:29:17.350 "name": "Nvme0", 00:29:17.350 "trtype": "tcp", 00:29:17.350 "traddr": "10.0.0.2", 00:29:17.350 "adrfam": "ipv4", 00:29:17.350 "trsvcid": "4420", 00:29:17.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.350 "hdgst": false, 00:29:17.350 "ddgst": false 00:29:17.350 }, 00:29:17.350 "method": "bdev_nvme_attach_controller" 00:29:17.350 }' 00:29:17.350 [2024-12-06 11:30:50.160594] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:29:17.350 [2024-12-06 11:30:50.160636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914674 ] 00:29:17.350 [2024-12-06 11:30:50.231443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.350 [2024-12-06 11:30:50.269834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.607 Running I/O for 10 seconds... 
00:29:18.177 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.177 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:18.177 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:18.177 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.177 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:18.177 11:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.177 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.177 
[2024-12-06 11:30:51.049655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed4c50 is same with the state(6) to be set 00:29:18.177 [2024-12-06 11:30:51.050166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 
11:30:51.050340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:18.178 [2024-12-06 11:30:51.050569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.178 [2024-12-06 11:30:51.050575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.178 [2024-12-06 11:30:51.050582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 
11:30:51.050873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050949] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.050988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.050994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.051007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.051021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.051035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.051048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.179 [2024-12-06 11:30:51.051067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.051074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1d5b0 is same with the state(6) to be set 00:29:18.179 [2024-12-06 11:30:51.051950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:18.179 task offset: 24576 on job bdev=Nvme0n1 fails 00:29:18.179 00:29:18.179 Latency(us) 00:29:18.179 [2024-12-06T10:30:51.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.179 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.179 Job: Nvme0n1 ended in about 0.59 seconds with error 00:29:18.179 Verification LBA range: start 0x0 length 0x400 00:29:18.179 Nvme0n1 : 0.59 2077.14 129.82 109.32 0.00 28702.89 5093.93 25141.99 00:29:18.179 [2024-12-06T10:30:51.117Z] 
=================================================================================================================== 00:29:18.179 [2024-12-06T10:30:51.117Z] Total : 2077.14 129.82 109.32 0.00 28702.89 5093.93 25141.99 00:29:18.179 [2024-12-06 11:30:51.054184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:18.179 [2024-12-06 11:30:51.054204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04630 (9): Bad file descriptor 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.179 [2024-12-06 11:30:51.055161] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:18.179 [2024-12-06 11:30:51.055229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:18.179 [2024-12-06 11:30:51.055251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.179 [2024-12-06 11:30:51.055262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:18.179 [2024-12-06 11:30:51.055269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:18.179 [2024-12-06 11:30:51.055277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.179 [2024-12-06 11:30:51.055284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc04630 
00:29:18.179 [2024-12-06 11:30:51.055301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04630 (9): Bad file descriptor 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.179 [2024-12-06 11:30:51.055312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:18.179 [2024-12-06 11:30:51.055319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:18.179 [2024-12-06 11:30:51.055326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:18.179 [2024-12-06 11:30:51.055334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.179 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1914674 00:29:19.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1914674) - No such process 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:19.566 11:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.566 { 00:29:19.566 "params": { 00:29:19.566 "name": "Nvme$subsystem", 00:29:19.566 "trtype": "$TEST_TRANSPORT", 00:29:19.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.566 "adrfam": "ipv4", 00:29:19.566 "trsvcid": "$NVMF_PORT", 00:29:19.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.566 "hdgst": ${hdgst:-false}, 00:29:19.566 "ddgst": ${ddgst:-false} 00:29:19.566 }, 00:29:19.566 "method": "bdev_nvme_attach_controller" 00:29:19.566 } 00:29:19.566 EOF 00:29:19.566 )") 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:19.566 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:19.566 "params": { 00:29:19.566 "name": "Nvme0", 00:29:19.566 "trtype": "tcp", 00:29:19.566 "traddr": "10.0.0.2", 00:29:19.566 "adrfam": "ipv4", 00:29:19.566 "trsvcid": "4420", 00:29:19.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:19.566 "hdgst": false, 00:29:19.566 "ddgst": false 00:29:19.566 }, 00:29:19.566 "method": "bdev_nvme_attach_controller" 00:29:19.566 }' 00:29:19.566 [2024-12-06 11:30:52.123202] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:29:19.566 [2024-12-06 11:30:52.123247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914959 ] 00:29:19.566 [2024-12-06 11:30:52.196257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.566 [2024-12-06 11:30:52.232650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.566 Running I/O for 1 seconds... 
00:29:20.633 2176.00 IOPS, 136.00 MiB/s 00:29:20.633 Latency(us) 00:29:20.633 [2024-12-06T10:30:53.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.633 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.633 Verification LBA range: start 0x0 length 0x400 00:29:20.633 Nvme0n1 : 1.01 2209.03 138.06 0.00 0.00 28497.61 2368.23 25141.99 00:29:20.633 [2024-12-06T10:30:53.571Z] =================================================================================================================== 00:29:20.633 [2024-12-06T10:30:53.571Z] Total : 2209.03 138.06 0.00 0.00 28497.61 2368.23 25141.99 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:20.918 
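The bdevperf summary above is internally consistent: at the 64 KiB IO size passed via -o 65536, throughput in MiB/s is just IOPS divided by 16, so 2176.00 IOPS is 136.00 MiB/s and the job-total 2209.03 IOPS over the 1.01 s runtime is 138.06 MiB/s. A quick arithmetic check:

```shell
# Convert the reported IOPS to MiB/s at the 64 KiB IO size used by this run.
iops=2176
io_size=65536                              # bytes, from bdevperf -o 65536
mib_per_s=$(( iops * io_size / 1048576 ))  # 2176 * 64 KiB = 136 MiB
echo "$mib_per_s MiB/s"
```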
11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.918 rmmod nvme_tcp 00:29:20.918 rmmod nvme_fabrics 00:29:20.918 rmmod nvme_keyring 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1914378 ']' 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1914378 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1914378 ']' 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1914378 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914378 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.918 11:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914378' 00:29:20.918 killing process with pid 1914378 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1914378 00:29:20.918 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1914378 00:29:21.177 [2024-12-06 11:30:53.900141] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
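The killprocess sequence traced above (kill -0 to probe pid 1914378, ps to read its comm name, a sudo guard, then kill and wait) can be sketched as follows. This is an illustrative reduction of the pattern, not the autotest_common.sh source; it probes the current shell's own pid so it is safe to run:

```shell
# Sketch of the killprocess pattern from the trace: verify the pid exists,
# refuse to signal a sudo wrapper, then announce the kill. The real helper
# also sends the signal and waits; this sketch stops at the announcement.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1       # process must exist
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1               # never kill a sudo wrapper
    echo "killing process with pid $pid"
}

killprocess_sketch $$
result=$?
```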
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.177 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.081 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.081 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:23.081 00:29:23.081 real 0m13.087s 00:29:23.081 user 0m18.674s 00:29:23.081 sys 0m6.393s 00:29:23.081 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.081 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:23.081 ************************************ 00:29:23.081 END TEST nvmf_host_management 00:29:23.081 ************************************ 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.341 ************************************ 00:29:23.341 START TEST nvmf_lvol 00:29:23.341 ************************************ 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.341 * Looking for test storage... 
00:29:23.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
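The trace above is scripts/common.sh's cmp_versions evaluating "lt 1.15 2": both versions are split on ".-:" into arrays and compared component by component, padding missing components with zero. A condensed sketch of the same comparison (illustrative, not the cmp_versions source):

```shell
# Component-wise "less than" for dotted version strings, mirroring the
# cmp_versions walk in the trace. Exit 0 when $1 < $2.
lt_sketch() {
    local IFS=.-: v=0 a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components compare as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
        v=$((v + 1))
    done
    return 1                               # equal is not strictly less-than
}

lt_sketch 1.15 2 && lt115=yes || lt115=no  # 1 < 2 at the first component
lt_sketch 2.0 1.15 && older=yes || older=no
```

This is why the lcov check succeeds: 1.15 < 2 is decided at the first component, just as the (( ver1[v] < ver2[v] )) step in the trace shows.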
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.341 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:23.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.341 --rc genhtml_branch_coverage=1 00:29:23.341 --rc genhtml_function_coverage=1 00:29:23.341 --rc genhtml_legend=1 00:29:23.341 --rc geninfo_all_blocks=1 00:29:23.341 --rc geninfo_unexecuted_blocks=1 00:29:23.342 00:29:23.342 ' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.342 --rc genhtml_branch_coverage=1 00:29:23.342 --rc genhtml_function_coverage=1 00:29:23.342 --rc genhtml_legend=1 00:29:23.342 --rc geninfo_all_blocks=1 00:29:23.342 --rc geninfo_unexecuted_blocks=1 00:29:23.342 00:29:23.342 ' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.342 --rc genhtml_branch_coverage=1 00:29:23.342 --rc genhtml_function_coverage=1 00:29:23.342 --rc genhtml_legend=1 00:29:23.342 --rc geninfo_all_blocks=1 00:29:23.342 --rc geninfo_unexecuted_blocks=1 00:29:23.342 00:29:23.342 ' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.342 --rc genhtml_branch_coverage=1 00:29:23.342 --rc genhtml_function_coverage=1 00:29:23.342 --rc genhtml_legend=1 00:29:23.342 --rc geninfo_all_blocks=1 00:29:23.342 --rc geninfo_unexecuted_blocks=1 00:29:23.342 00:29:23.342 ' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
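The enormous PATH values above come from paths/export.sh prepending the same toolchain directories each time it is sourced, so /opt/go/1.21.1/bin and friends appear many times over. Order-preserving deduplication is a one-liner; a sketch with a toy PATH (the real value is the long one in the trace):

```shell
# Remove duplicate PATH entries while keeping first-seen order.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

deduped=$(dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/bin")
echo "$deduped"
```

Duplicate entries are harmless for lookup (the first hit wins) but inflate every child environment, which is why the repeated prepending shows up so visibly in this log.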
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.342 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.602 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.602 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.602 
11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.602 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.172 11:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.172 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.172 11:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:30.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:30.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.173 11:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:30.173 Found net devices under 0000:af:00.0: cvl_0_0 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.173 11:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:30.173 Found net devices under 0000:af:00.1: cvl_0_1 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
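The discovery loop above maps each matching PCI device (here the two 0x8086:0x159b E810 ports at 0000:af:00.0/.1) to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/, which is how cvl_0_0 and cvl_0_1 are found. A sketch of that lookup, parameterized on a sysfs root so it can run against a fake tree instead of real hardware:

```shell
# Resolve a PCI BDF to its net device name(s) by listing <root>/<bdf>/net/,
# the same sysfs glob the trace uses. $1 = sysfs root, $2 = PCI address.
pci_to_netdevs() {
    local root=$1 pci=$2 d
    local -a devs=()
    for d in "$root/$pci/net/"*; do
        [ -e "$d" ] && devs+=("${d##*/}")   # keep only the interface name
    done
    echo "${devs[@]}"
}

# Exercise it against a fake sysfs tree; on a real host you would pass
# /sys/bus/pci/devices and an actual BDF.
fake=$(mktemp -d)
mkdir -p "$fake/0000:af:00.0/net/cvl_0_0"
found=$(pci_to_netdevs "$fake" 0000:af:00.0)
echo "$found"
```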
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.173 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:29:30.173 00:29:30.173 --- 10.0.0.2 ping statistics --- 00:29:30.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.173 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:29:30.173 00:29:30.173 --- 10.0.0.1 ping statistics --- 00:29:30.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.173 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1918956 
00:29:30.173 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1918956 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1918956 ']' 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.174 [2024-12-06 11:31:02.295307] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:30.174 [2024-12-06 11:31:02.296106] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:29:30.174 [2024-12-06 11:31:02.296133] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.174 [2024-12-06 11:31:02.355408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.174 [2024-12-06 11:31:02.394724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.174 [2024-12-06 11:31:02.394756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.174 [2024-12-06 11:31:02.394763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.174 [2024-12-06 11:31:02.394769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.174 [2024-12-06 11:31:02.394774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.174 [2024-12-06 11:31:02.398080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.174 [2024-12-06 11:31:02.398113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.174 [2024-12-06 11:31:02.398114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.174 [2024-12-06 11:31:02.464229] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.174 [2024-12-06 11:31:02.464502] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.174 [2024-12-06 11:31:02.464772] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:30.174 [2024-12-06 11:31:02.464995] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.174 [2024-12-06 11:31:02.690756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:30.174 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.433 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:30.433 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:30.433 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:30.691 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cc189c4e-13d5-4d27-b48c-376332c2b6ec 00:29:30.691 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc189c4e-13d5-4d27-b48c-376332c2b6ec lvol 20 00:29:30.949 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=afc8923c-0ec6-4158-bbe8-0f171a0520ed 00:29:30.949 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:30.949 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 afc8923c-0ec6-4158-bbe8-0f171a0520ed 00:29:31.208 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.465 [2024-12-06 11:31:04.194666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.465 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.723 
11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1919260 00:29:31.723 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:31.723 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:32.659 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot afc8923c-0ec6-4158-bbe8-0f171a0520ed MY_SNAPSHOT 00:29:32.917 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=452effa3-a39d-40b0-9eb8-530003e8b7fa 00:29:32.917 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize afc8923c-0ec6-4158-bbe8-0f171a0520ed 30 00:29:33.176 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 452effa3-a39d-40b0-9eb8-530003e8b7fa MY_CLONE 00:29:33.176 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a38ba41d-89f7-4057-a6db-8dad0768123c 00:29:33.176 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a38ba41d-89f7-4057-a6db-8dad0768123c 00:29:33.743 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1919260 00:29:43.716 Initializing NVMe Controllers 00:29:43.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:43.716 
Controller IO queue size 128, less than required. 00:29:43.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:43.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:43.716 Initialization complete. Launching workers. 00:29:43.716 ======================================================== 00:29:43.716 Latency(us) 00:29:43.716 Device Information : IOPS MiB/s Average min max 00:29:43.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13349.55 52.15 9589.21 1456.90 54640.36 00:29:43.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13523.44 52.83 9465.63 1856.92 67653.96 00:29:43.716 ======================================================== 00:29:43.716 Total : 26872.99 104.97 9527.02 1456.90 67653.96 00:29:43.716 00:29:43.716 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete afc8923c-0ec6-4158-bbe8-0f171a0520ed 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc189c4e-13d5-4d27-b48c-376332c2b6ec 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.716 rmmod nvme_tcp 00:29:43.716 rmmod nvme_fabrics 00:29:43.716 rmmod nvme_keyring 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1918956 ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1918956 ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1918956' 00:29:43.716 killing process with pid 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1918956 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.716 11:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.716 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.092 00:29:45.092 real 0m21.740s 00:29:45.092 user 0m55.301s 00:29:45.092 sys 0m9.839s 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.092 ************************************ 00:29:45.092 END TEST nvmf_lvol 00:29:45.092 ************************************ 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.092 ************************************ 00:29:45.092 START TEST nvmf_lvs_grow 00:29:45.092 ************************************ 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:45.092 * Looking for test storage... 
00:29:45.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:45.092 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.351 11:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.351 11:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:45.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.351 --rc genhtml_branch_coverage=1 00:29:45.351 --rc genhtml_function_coverage=1 00:29:45.351 --rc genhtml_legend=1 00:29:45.351 --rc geninfo_all_blocks=1 00:29:45.351 --rc geninfo_unexecuted_blocks=1 00:29:45.351 00:29:45.351 ' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:45.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.351 --rc genhtml_branch_coverage=1 00:29:45.351 --rc genhtml_function_coverage=1 00:29:45.351 --rc genhtml_legend=1 00:29:45.351 --rc geninfo_all_blocks=1 00:29:45.351 --rc geninfo_unexecuted_blocks=1 00:29:45.351 00:29:45.351 ' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:45.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.351 --rc genhtml_branch_coverage=1 00:29:45.351 --rc genhtml_function_coverage=1 00:29:45.351 --rc genhtml_legend=1 00:29:45.351 --rc geninfo_all_blocks=1 00:29:45.351 --rc geninfo_unexecuted_blocks=1 00:29:45.351 00:29:45.351 ' 00:29:45.351 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:45.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.351 --rc genhtml_branch_coverage=1 00:29:45.351 --rc genhtml_function_coverage=1 00:29:45.351 --rc genhtml_legend=1 00:29:45.351 --rc geninfo_all_blocks=1 00:29:45.352 --rc 
geninfo_unexecuted_blocks=1 00:29:45.352 00:29:45.352 ' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:45.352 11:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.352 11:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.352 11:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.352 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.916 
11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.916 11:31:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.916 11:31:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:51.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.916 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:51.917 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:51.917 Found net devices under 0000:af:00.0: cvl_0_0 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.917 11:31:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:51.917 Found net devices under 0000:af:00.1: cvl_0_1 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.917 
11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:29:51.917 00:29:51.917 --- 10.0.0.2 ping statistics --- 00:29:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.917 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:29:51.917 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:29:51.917 00:29:51.917 --- 10.0.0.1 ping statistics --- 00:29:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.917 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.917 11:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1924820 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1924820 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1924820 ']' 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.917 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.918 [2024-12-06 11:31:24.096629] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:51.918 [2024-12-06 11:31:24.097518] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:29:51.918 [2024-12-06 11:31:24.097551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.918 [2024-12-06 11:31:24.155796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.918 [2024-12-06 11:31:24.194978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.918 [2024-12-06 11:31:24.195009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.918 [2024-12-06 11:31:24.195016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.918 [2024-12-06 11:31:24.195022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.918 [2024-12-06 11:31:24.195026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.918 [2024-12-06 11:31:24.195550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.918 [2024-12-06 11:31:24.261124] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:51.918 [2024-12-06 11:31:24.261317] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:51.918 [2024-12-06 11:31:24.480186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.918 ************************************ 00:29:51.918 START TEST lvs_grow_clean 00:29:51.918 ************************************ 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:51.918 11:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:51.918 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:52.176 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:29:52.176 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:29:52.176 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc lvol 150 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=216bc475-575a-4bb7-8955-d723c82bfe4e 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.433 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:52.690 [2024-12-06 11:31:25.479918] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:52.690 [2024-12-06 11:31:25.480043] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:52.690 true 00:29:52.690 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:29:52.690 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:52.948 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:52.948 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:52.948 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 216bc475-575a-4bb7-8955-d723c82bfe4e 00:29:53.205 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:53.463 [2024-12-06 11:31:26.188506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1925231 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1925231 /var/tmp/bdevperf.sock 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1925231 ']' 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
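The cluster counts the test asserts (49 data clusters before the grow at `@30`, and 99 after `bdev_lvol_grow_lvstore`) follow from simple arithmetic on the backing-file sizes and the `--cluster-sz 4194304` passed to `bdev_lvol_create_lvstore`. A minimal sketch, assuming one cluster of lvstore metadata overhead (an assumption, but it matches the logged values):

```shell
#!/usr/bin/env bash
# Sketch: expected lvstore data clusters for the 200 MiB -> 400 MiB AIO file,
# with the 4 MiB cluster size passed to bdev_lvol_create_lvstore.
# Assumption: one cluster of metadata overhead, consistent with the logged 49/99.
cluster_mb=4
init_mb=200    # initial truncate size of the aio_bdev backing file
final_mb=400   # size after 'truncate -s 400M' and bdev_aio_rescan
init_clusters=$(( init_mb / cluster_mb - 1 ))
final_clusters=$(( final_mb / cluster_mb - 1 ))
echo "data_clusters before grow: $init_clusters"   # 49, as checked at @30
echo "data_clusters after grow:  $final_clusters"  # 99, as checked after the grow
```

Note the test deliberately verifies that a raw `bdev_aio_rescan` alone leaves `total_data_clusters` at 49; only the explicit `bdev_lvol_grow_lvstore` RPC later moves it to 99.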
00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.463 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.720 [2024-12-06 11:31:26.425779] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:29:53.720 [2024-12-06 11:31:26.425827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925231 ] 00:29:53.720 [2024-12-06 11:31:26.500364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.720 [2024-12-06 11:31:26.539549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.654 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.654 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:54.654 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:54.654 Nvme0n1 00:29:54.654 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:54.913 [ 00:29:54.913 { 00:29:54.913 "name": "Nvme0n1", 00:29:54.913 "aliases": [ 00:29:54.913 "216bc475-575a-4bb7-8955-d723c82bfe4e" 00:29:54.913 ], 00:29:54.913 "product_name": "NVMe disk", 00:29:54.913 
"block_size": 4096, 00:29:54.913 "num_blocks": 38912, 00:29:54.913 "uuid": "216bc475-575a-4bb7-8955-d723c82bfe4e", 00:29:54.913 "numa_id": 1, 00:29:54.913 "assigned_rate_limits": { 00:29:54.913 "rw_ios_per_sec": 0, 00:29:54.913 "rw_mbytes_per_sec": 0, 00:29:54.913 "r_mbytes_per_sec": 0, 00:29:54.913 "w_mbytes_per_sec": 0 00:29:54.913 }, 00:29:54.913 "claimed": false, 00:29:54.913 "zoned": false, 00:29:54.913 "supported_io_types": { 00:29:54.913 "read": true, 00:29:54.913 "write": true, 00:29:54.913 "unmap": true, 00:29:54.913 "flush": true, 00:29:54.913 "reset": true, 00:29:54.913 "nvme_admin": true, 00:29:54.913 "nvme_io": true, 00:29:54.913 "nvme_io_md": false, 00:29:54.913 "write_zeroes": true, 00:29:54.913 "zcopy": false, 00:29:54.913 "get_zone_info": false, 00:29:54.913 "zone_management": false, 00:29:54.913 "zone_append": false, 00:29:54.913 "compare": true, 00:29:54.913 "compare_and_write": true, 00:29:54.913 "abort": true, 00:29:54.913 "seek_hole": false, 00:29:54.913 "seek_data": false, 00:29:54.913 "copy": true, 00:29:54.913 "nvme_iov_md": false 00:29:54.913 }, 00:29:54.913 "memory_domains": [ 00:29:54.913 { 00:29:54.913 "dma_device_id": "system", 00:29:54.913 "dma_device_type": 1 00:29:54.913 } 00:29:54.913 ], 00:29:54.913 "driver_specific": { 00:29:54.913 "nvme": [ 00:29:54.913 { 00:29:54.913 "trid": { 00:29:54.913 "trtype": "TCP", 00:29:54.913 "adrfam": "IPv4", 00:29:54.913 "traddr": "10.0.0.2", 00:29:54.913 "trsvcid": "4420", 00:29:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:54.913 }, 00:29:54.913 "ctrlr_data": { 00:29:54.913 "cntlid": 1, 00:29:54.913 "vendor_id": "0x8086", 00:29:54.913 "model_number": "SPDK bdev Controller", 00:29:54.913 "serial_number": "SPDK0", 00:29:54.913 "firmware_revision": "25.01", 00:29:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.913 "oacs": { 00:29:54.913 "security": 0, 00:29:54.913 "format": 0, 00:29:54.913 "firmware": 0, 00:29:54.913 "ns_manage": 0 00:29:54.913 }, 00:29:54.913 "multi_ctrlr": true, 
00:29:54.913 "ana_reporting": false 00:29:54.913 }, 00:29:54.913 "vs": { 00:29:54.913 "nvme_version": "1.3" 00:29:54.913 }, 00:29:54.913 "ns_data": { 00:29:54.913 "id": 1, 00:29:54.913 "can_share": true 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ], 00:29:54.913 "mp_policy": "active_passive" 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ] 00:29:54.913 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1925401 00:29:54.913 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:54.913 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:54.913 Running I/O for 10 seconds... 00:29:55.850 Latency(us) 00:29:55.850 [2024-12-06T10:31:28.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.850 Nvme0n1 : 1.00 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:29:55.850 [2024-12-06T10:31:28.788Z] =================================================================================================================== 00:29:55.850 [2024-12-06T10:31:28.788Z] Total : 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:29:55.850 00:29:56.785 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:29:57.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.044 Nvme0n1 : 2.00 24972.50 97.55 0.00 0.00 0.00 0.00 0.00 00:29:57.044 [2024-12-06T10:31:29.982Z] 
=================================================================================================================== 00:29:57.044 [2024-12-06T10:31:29.982Z] Total : 24972.50 97.55 0.00 0.00 0.00 0.00 0.00 00:29:57.044 00:29:57.044 true 00:29:57.044 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:29:57.044 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:57.303 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:57.303 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:57.303 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1925401 00:29:57.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.869 Nvme0n1 : 3.00 24988.00 97.61 0.00 0.00 0.00 0.00 0.00 00:29:57.869 [2024-12-06T10:31:30.807Z] =================================================================================================================== 00:29:57.869 [2024-12-06T10:31:30.808Z] Total : 24988.00 97.61 0.00 0.00 0.00 0.00 0.00 00:29:57.870 00:29:59.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.248 Nvme0n1 : 4.00 25095.25 98.03 0.00 0.00 0.00 0.00 0.00 00:29:59.248 [2024-12-06T10:31:32.186Z] =================================================================================================================== 00:29:59.248 [2024-12-06T10:31:32.186Z] Total : 25095.25 98.03 0.00 0.00 0.00 0.00 0.00 00:29:59.248 00:30:00.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:00.185 Nvme0n1 : 5.00 25181.60 98.37 0.00 0.00 0.00 0.00 0.00 00:30:00.185 [2024-12-06T10:31:33.123Z] =================================================================================================================== 00:30:00.185 [2024-12-06T10:31:33.123Z] Total : 25181.60 98.37 0.00 0.00 0.00 0.00 0.00 00:30:00.185 00:30:01.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.121 Nvme0n1 : 6.00 25239.17 98.59 0.00 0.00 0.00 0.00 0.00 00:30:01.121 [2024-12-06T10:31:34.059Z] =================================================================================================================== 00:30:01.121 [2024-12-06T10:31:34.059Z] Total : 25239.17 98.59 0.00 0.00 0.00 0.00 0.00 00:30:01.121 00:30:02.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.054 Nvme0n1 : 7.00 25298.43 98.82 0.00 0.00 0.00 0.00 0.00 00:30:02.054 [2024-12-06T10:31:34.992Z] =================================================================================================================== 00:30:02.054 [2024-12-06T10:31:34.992Z] Total : 25298.43 98.82 0.00 0.00 0.00 0.00 0.00 00:30:02.054 00:30:02.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.988 Nvme0n1 : 8.00 25327.00 98.93 0.00 0.00 0.00 0.00 0.00 00:30:02.988 [2024-12-06T10:31:35.926Z] =================================================================================================================== 00:30:02.988 [2024-12-06T10:31:35.926Z] Total : 25327.00 98.93 0.00 0.00 0.00 0.00 0.00 00:30:02.988 00:30:03.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.922 Nvme0n1 : 9.00 25353.00 99.04 0.00 0.00 0.00 0.00 0.00 00:30:03.922 [2024-12-06T10:31:36.860Z] =================================================================================================================== 00:30:03.922 [2024-12-06T10:31:36.860Z] Total : 25353.00 99.04 0.00 0.00 0.00 0.00 0.00 00:30:03.922 
00:30:04.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.855 Nvme0n1 : 10.00 25372.10 99.11 0.00 0.00 0.00 0.00 0.00 00:30:04.855 [2024-12-06T10:31:37.793Z] =================================================================================================================== 00:30:04.855 [2024-12-06T10:31:37.793Z] Total : 25372.10 99.11 0.00 0.00 0.00 0.00 0.00 00:30:04.855 00:30:04.855 00:30:04.855 Latency(us) 00:30:04.855 [2024-12-06T10:31:37.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.855 Nvme0n1 : 10.00 25375.05 99.12 0.00 0.00 5041.61 1787.35 25141.99 00:30:04.855 [2024-12-06T10:31:37.793Z] =================================================================================================================== 00:30:04.855 [2024-12-06T10:31:37.793Z] Total : 25375.05 99.12 0.00 0.00 5041.61 1787.35 25141.99 00:30:04.855 { 00:30:04.855 "results": [ 00:30:04.855 { 00:30:04.855 "job": "Nvme0n1", 00:30:04.855 "core_mask": "0x2", 00:30:04.855 "workload": "randwrite", 00:30:04.855 "status": "finished", 00:30:04.855 "queue_depth": 128, 00:30:04.855 "io_size": 4096, 00:30:04.855 "runtime": 10.003211, 00:30:04.855 "iops": 25375.05207078007, 00:30:04.855 "mibps": 99.12129715148465, 00:30:04.855 "io_failed": 0, 00:30:04.855 "io_timeout": 0, 00:30:04.855 "avg_latency_us": 5041.612725037891, 00:30:04.855 "min_latency_us": 1787.3454545454545, 00:30:04.855 "max_latency_us": 25141.992727272725 00:30:04.855 } 00:30:04.855 ], 00:30:04.855 "core_count": 1 00:30:04.855 } 00:30:04.855 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1925231 00:30:04.855 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1925231 ']' 00:30:05.115 11:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1925231 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925231 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925231' 00:30:05.115 killing process with pid 1925231 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1925231 00:30:05.115 Received shutdown signal, test time was about 10.000000 seconds 00:30:05.115 00:30:05.115 Latency(us) 00:30:05.115 [2024-12-06T10:31:38.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.115 [2024-12-06T10:31:38.053Z] =================================================================================================================== 00:30:05.115 [2024-12-06T10:31:38.053Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.115 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1925231 00:30:05.115 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.374 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.634 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:05.634 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:05.894 [2024-12-06 11:31:38.743980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:05.894 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:06.154 request: 00:30:06.154 { 00:30:06.154 "uuid": "7807b8db-1043-4a4b-bc36-a05126a0bdbc", 00:30:06.154 "method": 
"bdev_lvol_get_lvstores", 00:30:06.154 "req_id": 1 00:30:06.154 } 00:30:06.154 Got JSON-RPC error response 00:30:06.154 response: 00:30:06.154 { 00:30:06.154 "code": -19, 00:30:06.154 "message": "No such device" 00:30:06.154 } 00:30:06.154 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:06.154 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:06.154 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:06.154 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:06.154 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:06.415 aio_bdev 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 216bc475-575a-4bb7-8955-d723c82bfe4e 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=216bc475-575a-4bb7-8955-d723c82bfe4e 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:06.415 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:06.674 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 216bc475-575a-4bb7-8955-d723c82bfe4e -t 2000 00:30:06.674 [ 00:30:06.674 { 00:30:06.674 "name": "216bc475-575a-4bb7-8955-d723c82bfe4e", 00:30:06.674 "aliases": [ 00:30:06.674 "lvs/lvol" 00:30:06.674 ], 00:30:06.674 "product_name": "Logical Volume", 00:30:06.674 "block_size": 4096, 00:30:06.674 "num_blocks": 38912, 00:30:06.674 "uuid": "216bc475-575a-4bb7-8955-d723c82bfe4e", 00:30:06.674 "assigned_rate_limits": { 00:30:06.674 "rw_ios_per_sec": 0, 00:30:06.674 "rw_mbytes_per_sec": 0, 00:30:06.674 "r_mbytes_per_sec": 0, 00:30:06.674 "w_mbytes_per_sec": 0 00:30:06.674 }, 00:30:06.674 "claimed": false, 00:30:06.674 "zoned": false, 00:30:06.674 "supported_io_types": { 00:30:06.674 "read": true, 00:30:06.674 "write": true, 00:30:06.674 "unmap": true, 00:30:06.674 "flush": false, 00:30:06.674 "reset": true, 00:30:06.674 "nvme_admin": false, 00:30:06.674 "nvme_io": false, 00:30:06.674 "nvme_io_md": false, 00:30:06.674 "write_zeroes": true, 00:30:06.674 "zcopy": false, 00:30:06.674 "get_zone_info": false, 00:30:06.674 "zone_management": false, 00:30:06.674 "zone_append": false, 00:30:06.674 "compare": false, 00:30:06.674 "compare_and_write": false, 00:30:06.674 "abort": false, 00:30:06.674 "seek_hole": true, 00:30:06.674 "seek_data": true, 00:30:06.674 "copy": false, 00:30:06.674 "nvme_iov_md": false 00:30:06.674 }, 00:30:06.674 "driver_specific": { 00:30:06.674 "lvol": { 00:30:06.674 "lvol_store_uuid": "7807b8db-1043-4a4b-bc36-a05126a0bdbc", 00:30:06.674 "base_bdev": "aio_bdev", 00:30:06.674 
"thin_provision": false, 00:30:06.674 "num_allocated_clusters": 38, 00:30:06.674 "snapshot": false, 00:30:06.674 "clone": false, 00:30:06.674 "esnap_clone": false 00:30:06.674 } 00:30:06.674 } 00:30:06.674 } 00:30:06.674 ] 00:30:06.674 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:06.674 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:06.674 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:06.933 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:06.933 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 00:30:06.933 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:07.192 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:07.192 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 216bc475-575a-4bb7-8955-d723c82bfe4e 00:30:07.192 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7807b8db-1043-4a4b-bc36-a05126a0bdbc 
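The `free_clusters=61` and `num_allocated_clusters=38` values above are consistent with the 150 MiB lvol in the grown 99-cluster lvstore. A sketch of the arithmetic, assuming thick provisioning (the bdev dump shows `"thin_provision": false`) and ceiling division of the lvol size into 4 MiB clusters:

```shell
#!/usr/bin/env bash
# Sketch: clusters consumed by the thick-provisioned 150 MiB lvol and the
# free clusters remaining in the grown (99-data-cluster) lvstore.
lvol_mb=150
cluster_mb=4
total_data_clusters=99
# ceiling division: 150 MiB does not divide evenly into 4 MiB clusters
allocated=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
free=$(( total_data_clusters - allocated ))
echo "allocated=$allocated free=$free"   # allocated=38 free=61, matching the log
```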
00:30:07.451 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.711 00:30:07.711 real 0m15.986s 00:30:07.711 user 0m15.559s 00:30:07.711 sys 0m1.493s 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.711 ************************************ 00:30:07.711 END TEST lvs_grow_clean 00:30:07.711 ************************************ 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.711 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.711 ************************************ 00:30:07.711 START TEST lvs_grow_dirty 00:30:07.711 ************************************ 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:07.712 11:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.712 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:07.970 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:07.971 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:08.229 11:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d7620291-4060-442f-9970-d190a35c379f 00:30:08.229 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:08.229 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:08.229 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:08.229 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:08.229 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d7620291-4060-442f-9970-d190a35c379f lvol 150 00:30:08.488 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:08.488 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.488 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:08.747 [2024-12-06 11:31:41.483918] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:08.747 [2024-12-06 
11:31:41.484040] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:08.747 true 00:30:08.747 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:08.747 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:08.747 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:08.747 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:09.005 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:09.263 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.263 [2024-12-06 11:31:42.172465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.263 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.521 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1928070 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1928070 /var/tmp/bdevperf.sock 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1928070 ']' 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.522 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:09.522 [2024-12-06 11:31:42.418679] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:30:09.522 [2024-12-06 11:31:42.418726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928070 ] 00:30:09.780 [2024-12-06 11:31:42.492479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.780 [2024-12-06 11:31:42.531743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.348 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.348 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:10.348 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:10.605 Nvme0n1 00:30:10.605 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:10.863 [ 00:30:10.863 { 00:30:10.863 "name": "Nvme0n1", 00:30:10.863 "aliases": [ 00:30:10.863 "6605eb7e-fdb4-4de1-aebe-f584c944695d" 00:30:10.863 ], 00:30:10.863 "product_name": "NVMe disk", 00:30:10.863 "block_size": 4096, 00:30:10.863 "num_blocks": 38912, 00:30:10.863 "uuid": "6605eb7e-fdb4-4de1-aebe-f584c944695d", 00:30:10.863 "numa_id": 1, 00:30:10.863 "assigned_rate_limits": { 00:30:10.863 "rw_ios_per_sec": 0, 00:30:10.863 "rw_mbytes_per_sec": 0, 00:30:10.863 "r_mbytes_per_sec": 0, 00:30:10.863 "w_mbytes_per_sec": 0 00:30:10.863 }, 00:30:10.863 "claimed": false, 00:30:10.863 "zoned": false, 
00:30:10.863 "supported_io_types": { 00:30:10.863 "read": true, 00:30:10.863 "write": true, 00:30:10.863 "unmap": true, 00:30:10.863 "flush": true, 00:30:10.863 "reset": true, 00:30:10.863 "nvme_admin": true, 00:30:10.863 "nvme_io": true, 00:30:10.863 "nvme_io_md": false, 00:30:10.863 "write_zeroes": true, 00:30:10.863 "zcopy": false, 00:30:10.863 "get_zone_info": false, 00:30:10.863 "zone_management": false, 00:30:10.863 "zone_append": false, 00:30:10.863 "compare": true, 00:30:10.863 "compare_and_write": true, 00:30:10.863 "abort": true, 00:30:10.863 "seek_hole": false, 00:30:10.863 "seek_data": false, 00:30:10.863 "copy": true, 00:30:10.863 "nvme_iov_md": false 00:30:10.863 }, 00:30:10.863 "memory_domains": [ 00:30:10.863 { 00:30:10.863 "dma_device_id": "system", 00:30:10.863 "dma_device_type": 1 00:30:10.863 } 00:30:10.863 ], 00:30:10.863 "driver_specific": { 00:30:10.863 "nvme": [ 00:30:10.863 { 00:30:10.863 "trid": { 00:30:10.863 "trtype": "TCP", 00:30:10.863 "adrfam": "IPv4", 00:30:10.863 "traddr": "10.0.0.2", 00:30:10.863 "trsvcid": "4420", 00:30:10.863 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.863 }, 00:30:10.863 "ctrlr_data": { 00:30:10.863 "cntlid": 1, 00:30:10.863 "vendor_id": "0x8086", 00:30:10.863 "model_number": "SPDK bdev Controller", 00:30:10.863 "serial_number": "SPDK0", 00:30:10.863 "firmware_revision": "25.01", 00:30:10.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.863 "oacs": { 00:30:10.863 "security": 0, 00:30:10.863 "format": 0, 00:30:10.863 "firmware": 0, 00:30:10.863 "ns_manage": 0 00:30:10.863 }, 00:30:10.863 "multi_ctrlr": true, 00:30:10.863 "ana_reporting": false 00:30:10.863 }, 00:30:10.863 "vs": { 00:30:10.863 "nvme_version": "1.3" 00:30:10.863 }, 00:30:10.863 "ns_data": { 00:30:10.863 "id": 1, 00:30:10.863 "can_share": true 00:30:10.863 } 00:30:10.863 } 00:30:10.863 ], 00:30:10.863 "mp_policy": "active_passive" 00:30:10.863 } 00:30:10.863 } 00:30:10.863 ] 00:30:10.863 11:31:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1928330 00:30:10.863 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.863 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:10.863 Running I/O for 10 seconds... 00:30:11.797 Latency(us) 00:30:11.797 [2024-12-06T10:31:44.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.797 Nvme0n1 : 1.00 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:30:11.797 [2024-12-06T10:31:44.735Z] =================================================================================================================== 00:30:11.797 [2024-12-06T10:31:44.735Z] Total : 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:30:11.797 00:30:12.732 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d7620291-4060-442f-9970-d190a35c379f 00:30:12.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.991 Nvme0n1 : 2.00 25082.50 97.98 0.00 0.00 0.00 0.00 0.00 00:30:12.991 [2024-12-06T10:31:45.929Z] =================================================================================================================== 00:30:12.991 [2024-12-06T10:31:45.929Z] Total : 25082.50 97.98 0.00 0.00 0.00 0.00 0.00 00:30:12.991 00:30:12.991 true 00:30:12.991 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:12.991 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:13.249 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:13.249 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:13.249 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1928330 00:30:13.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.817 Nvme0n1 : 3.00 25209.67 98.48 0.00 0.00 0.00 0.00 0.00 00:30:13.817 [2024-12-06T10:31:46.755Z] =================================================================================================================== 00:30:13.817 [2024-12-06T10:31:46.755Z] Total : 25209.67 98.48 0.00 0.00 0.00 0.00 0.00 00:30:13.817 00:30:15.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.212 Nvme0n1 : 4.00 25301.25 98.83 0.00 0.00 0.00 0.00 0.00 00:30:15.212 [2024-12-06T10:31:48.150Z] =================================================================================================================== 00:30:15.212 [2024-12-06T10:31:48.150Z] Total : 25301.25 98.83 0.00 0.00 0.00 0.00 0.00 00:30:15.212 00:30:15.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.806 Nvme0n1 : 5.00 25365.80 99.09 0.00 0.00 0.00 0.00 0.00 00:30:15.806 [2024-12-06T10:31:48.744Z] =================================================================================================================== 00:30:15.806 [2024-12-06T10:31:48.744Z] Total : 25365.80 99.09 0.00 0.00 0.00 0.00 0.00 00:30:15.806 00:30:16.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:16.842 Nvme0n1 : 6.00 25392.67 99.19 0.00 0.00 0.00 0.00 0.00 00:30:16.842 [2024-12-06T10:31:49.780Z] =================================================================================================================== 00:30:16.842 [2024-12-06T10:31:49.780Z] Total : 25392.67 99.19 0.00 0.00 0.00 0.00 0.00 00:30:16.842 00:30:18.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.218 Nvme0n1 : 7.00 25357.43 99.05 0.00 0.00 0.00 0.00 0.00 00:30:18.218 [2024-12-06T10:31:51.156Z] =================================================================================================================== 00:30:18.218 [2024-12-06T10:31:51.156Z] Total : 25357.43 99.05 0.00 0.00 0.00 0.00 0.00 00:30:18.218 00:30:19.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.154 Nvme0n1 : 8.00 25394.50 99.20 0.00 0.00 0.00 0.00 0.00 00:30:19.154 [2024-12-06T10:31:52.092Z] =================================================================================================================== 00:30:19.154 [2024-12-06T10:31:52.092Z] Total : 25394.50 99.20 0.00 0.00 0.00 0.00 0.00 00:30:19.154 00:30:20.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.090 Nvme0n1 : 9.00 25423.33 99.31 0.00 0.00 0.00 0.00 0.00 00:30:20.090 [2024-12-06T10:31:53.028Z] =================================================================================================================== 00:30:20.090 [2024-12-06T10:31:53.028Z] Total : 25423.33 99.31 0.00 0.00 0.00 0.00 0.00 00:30:20.090 00:30:21.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.027 Nvme0n1 : 10.00 25446.40 99.40 0.00 0.00 0.00 0.00 0.00 00:30:21.027 [2024-12-06T10:31:53.965Z] =================================================================================================================== 00:30:21.027 [2024-12-06T10:31:53.965Z] Total : 25446.40 99.40 0.00 0.00 0.00 0.00 0.00 00:30:21.027 00:30:21.027 
00:30:21.027 Latency(us) 00:30:21.027 [2024-12-06T10:31:53.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.027 Nvme0n1 : 10.00 25451.76 99.42 0.00 0.00 5026.72 2934.23 25261.15 00:30:21.027 [2024-12-06T10:31:53.965Z] =================================================================================================================== 00:30:21.027 [2024-12-06T10:31:53.965Z] Total : 25451.76 99.42 0.00 0.00 5026.72 2934.23 25261.15 00:30:21.027 { 00:30:21.027 "results": [ 00:30:21.027 { 00:30:21.027 "job": "Nvme0n1", 00:30:21.027 "core_mask": "0x2", 00:30:21.027 "workload": "randwrite", 00:30:21.027 "status": "finished", 00:30:21.027 "queue_depth": 128, 00:30:21.027 "io_size": 4096, 00:30:21.027 "runtime": 10.002922, 00:30:21.027 "iops": 25451.762994852903, 00:30:21.027 "mibps": 99.42094919864415, 00:30:21.027 "io_failed": 0, 00:30:21.027 "io_timeout": 0, 00:30:21.027 "avg_latency_us": 5026.715648795649, 00:30:21.027 "min_latency_us": 2934.2254545454543, 00:30:21.027 "max_latency_us": 25261.14909090909 00:30:21.027 } 00:30:21.027 ], 00:30:21.027 "core_count": 1 00:30:21.027 } 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1928070 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1928070 ']' 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1928070 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.027 11:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1928070 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1928070' 00:30:21.027 killing process with pid 1928070 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1928070 00:30:21.027 Received shutdown signal, test time was about 10.000000 seconds 00:30:21.027 00:30:21.027 Latency(us) 00:30:21.027 [2024-12-06T10:31:53.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.027 [2024-12-06T10:31:53.965Z] =================================================================================================================== 00:30:21.027 [2024-12-06T10:31:53.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.027 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1928070 00:30:21.286 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.286 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.545 11:31:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:21.545 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1924820 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1924820 00:30:21.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1924820 Killed "${NVMF_APP[@]}" "$@" 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1930182 00:30:21.804 11:31:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1930182 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1930182 ']' 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.804 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.804 [2024-12-06 11:31:54.597802] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:21.804 [2024-12-06 11:31:54.598682] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:30:21.804 [2024-12-06 11:31:54.598720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.804 [2024-12-06 11:31:54.671181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.804 [2024-12-06 11:31:54.708738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.804 [2024-12-06 11:31:54.708773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.804 [2024-12-06 11:31:54.708780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.804 [2024-12-06 11:31:54.708785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.804 [2024-12-06 11:31:54.708790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.804 [2024-12-06 11:31:54.709345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.063 [2024-12-06 11:31:54.776396] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.063 [2024-12-06 11:31:54.776601] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.063 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:22.063 [2024-12-06 11:31:54.998663] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:22.063 [2024-12-06 11:31:54.998855] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:22.063 [2024-12-06 11:31:54.998939] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:22.330 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:22.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6605eb7e-fdb4-4de1-aebe-f584c944695d -t 2000 00:30:22.594 [ 00:30:22.594 { 00:30:22.594 "name": "6605eb7e-fdb4-4de1-aebe-f584c944695d", 00:30:22.594 "aliases": [ 00:30:22.594 "lvs/lvol" 00:30:22.594 ], 00:30:22.594 "product_name": "Logical Volume", 00:30:22.594 "block_size": 4096, 00:30:22.594 "num_blocks": 38912, 00:30:22.594 "uuid": "6605eb7e-fdb4-4de1-aebe-f584c944695d", 00:30:22.594 "assigned_rate_limits": { 00:30:22.594 "rw_ios_per_sec": 0, 00:30:22.594 "rw_mbytes_per_sec": 0, 00:30:22.594 "r_mbytes_per_sec": 0, 00:30:22.594 "w_mbytes_per_sec": 0 00:30:22.594 }, 00:30:22.594 "claimed": false, 00:30:22.594 "zoned": false, 00:30:22.594 "supported_io_types": { 00:30:22.594 "read": true, 00:30:22.594 "write": true, 00:30:22.594 "unmap": true, 00:30:22.594 "flush": false, 00:30:22.594 "reset": true, 00:30:22.594 "nvme_admin": false, 00:30:22.594 "nvme_io": false, 00:30:22.594 "nvme_io_md": false, 00:30:22.594 "write_zeroes": true, 
00:30:22.594 "zcopy": false, 00:30:22.594 "get_zone_info": false, 00:30:22.594 "zone_management": false, 00:30:22.594 "zone_append": false, 00:30:22.594 "compare": false, 00:30:22.594 "compare_and_write": false, 00:30:22.594 "abort": false, 00:30:22.594 "seek_hole": true, 00:30:22.594 "seek_data": true, 00:30:22.594 "copy": false, 00:30:22.594 "nvme_iov_md": false 00:30:22.594 }, 00:30:22.594 "driver_specific": { 00:30:22.594 "lvol": { 00:30:22.594 "lvol_store_uuid": "d7620291-4060-442f-9970-d190a35c379f", 00:30:22.594 "base_bdev": "aio_bdev", 00:30:22.594 "thin_provision": false, 00:30:22.594 "num_allocated_clusters": 38, 00:30:22.594 "snapshot": false, 00:30:22.594 "clone": false, 00:30:22.594 "esnap_clone": false 00:30:22.594 } 00:30:22.594 } 00:30:22.594 } 00:30:22.594 ] 00:30:22.594 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:22.594 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:22.594 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:22.852 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:22.852 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:22.852 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:22.852 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:22.852 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.110 [2024-12-06 11:31:55.885791] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:23.110 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:23.368 request: 00:30:23.368 { 00:30:23.368 "uuid": "d7620291-4060-442f-9970-d190a35c379f", 00:30:23.368 "method": "bdev_lvol_get_lvstores", 00:30:23.368 "req_id": 1 00:30:23.368 } 00:30:23.368 Got JSON-RPC error response 00:30:23.368 response: 00:30:23.368 { 00:30:23.368 "code": -19, 00:30:23.368 "message": "No such device" 00:30:23.368 } 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.368 aio_bdev 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:23.368 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:23.626 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6605eb7e-fdb4-4de1-aebe-f584c944695d -t 2000 00:30:23.884 [ 00:30:23.884 { 00:30:23.884 "name": "6605eb7e-fdb4-4de1-aebe-f584c944695d", 00:30:23.884 "aliases": [ 00:30:23.884 "lvs/lvol" 00:30:23.884 ], 00:30:23.884 "product_name": "Logical Volume", 00:30:23.884 "block_size": 4096, 00:30:23.884 "num_blocks": 38912, 00:30:23.884 "uuid": "6605eb7e-fdb4-4de1-aebe-f584c944695d", 00:30:23.884 "assigned_rate_limits": { 00:30:23.884 "rw_ios_per_sec": 0, 00:30:23.884 "rw_mbytes_per_sec": 0, 00:30:23.884 
"r_mbytes_per_sec": 0, 00:30:23.884 "w_mbytes_per_sec": 0 00:30:23.884 }, 00:30:23.884 "claimed": false, 00:30:23.884 "zoned": false, 00:30:23.884 "supported_io_types": { 00:30:23.884 "read": true, 00:30:23.884 "write": true, 00:30:23.884 "unmap": true, 00:30:23.884 "flush": false, 00:30:23.884 "reset": true, 00:30:23.884 "nvme_admin": false, 00:30:23.884 "nvme_io": false, 00:30:23.884 "nvme_io_md": false, 00:30:23.884 "write_zeroes": true, 00:30:23.884 "zcopy": false, 00:30:23.884 "get_zone_info": false, 00:30:23.884 "zone_management": false, 00:30:23.884 "zone_append": false, 00:30:23.884 "compare": false, 00:30:23.884 "compare_and_write": false, 00:30:23.884 "abort": false, 00:30:23.884 "seek_hole": true, 00:30:23.884 "seek_data": true, 00:30:23.884 "copy": false, 00:30:23.884 "nvme_iov_md": false 00:30:23.884 }, 00:30:23.884 "driver_specific": { 00:30:23.884 "lvol": { 00:30:23.884 "lvol_store_uuid": "d7620291-4060-442f-9970-d190a35c379f", 00:30:23.884 "base_bdev": "aio_bdev", 00:30:23.884 "thin_provision": false, 00:30:23.884 "num_allocated_clusters": 38, 00:30:23.884 "snapshot": false, 00:30:23.884 "clone": false, 00:30:23.884 "esnap_clone": false 00:30:23.884 } 00:30:23.884 } 00:30:23.884 } 00:30:23.884 ] 00:30:23.885 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:23.885 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:23.885 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:23.885 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:24.143 11:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d7620291-4060-442f-9970-d190a35c379f 00:30:24.143 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:24.143 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:24.143 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6605eb7e-fdb4-4de1-aebe-f584c944695d 00:30:24.402 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7620291-4060-442f-9970-d190a35c379f 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:24.662 00:30:24.662 real 0m16.974s 00:30:24.662 user 0m34.519s 00:30:24.662 sys 0m3.769s 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:24.662 ************************************ 00:30:24.662 END TEST lvs_grow_dirty 00:30:24.662 ************************************ 
00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:24.662 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:24.921 nvmf_trace.0 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.921 11:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.921 rmmod nvme_tcp 00:30:24.921 rmmod nvme_fabrics 00:30:24.921 rmmod nvme_keyring 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1930182 ']' 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1930182 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1930182 ']' 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1930182 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930182 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.921 
11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930182' 00:30:24.921 killing process with pid 1930182 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1930182 00:30:24.921 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1930182 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.181 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.088 
11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.088 00:30:27.088 real 0m42.138s 00:30:27.088 user 0m52.655s 00:30:27.088 sys 0m10.105s 00:30:27.088 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.088 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:27.088 ************************************ 00:30:27.088 END TEST nvmf_lvs_grow 00:30:27.088 ************************************ 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.348 ************************************ 00:30:27.348 START TEST nvmf_bdev_io_wait 00:30:27.348 ************************************ 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:27.348 * Looking for test storage... 
00:30:27.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:27.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.348 --rc genhtml_branch_coverage=1 00:30:27.348 --rc genhtml_function_coverage=1 00:30:27.348 --rc genhtml_legend=1 00:30:27.348 --rc geninfo_all_blocks=1 00:30:27.348 --rc geninfo_unexecuted_blocks=1 00:30:27.348 00:30:27.348 ' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:27.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.348 --rc genhtml_branch_coverage=1 00:30:27.348 --rc genhtml_function_coverage=1 00:30:27.348 --rc genhtml_legend=1 00:30:27.348 --rc geninfo_all_blocks=1 00:30:27.348 --rc geninfo_unexecuted_blocks=1 00:30:27.348 00:30:27.348 ' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:27.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.348 --rc genhtml_branch_coverage=1 00:30:27.348 --rc genhtml_function_coverage=1 00:30:27.348 --rc genhtml_legend=1 00:30:27.348 --rc geninfo_all_blocks=1 00:30:27.348 --rc geninfo_unexecuted_blocks=1 00:30:27.348 00:30:27.348 ' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:27.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.348 --rc genhtml_branch_coverage=1 00:30:27.348 --rc genhtml_function_coverage=1 
00:30:27.348 --rc genhtml_legend=1 00:30:27.348 --rc geninfo_all_blocks=1 00:30:27.348 --rc geninfo_unexecuted_blocks=1 00:30:27.348 00:30:27.348 ' 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.348 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:27.608 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.608 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.608 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.608 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:34.172 11:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.172 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:34.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:34.173 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:34.173 Found net devices under 0000:af:00.0: cvl_0_0 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:34.173 Found net devices under 0000:af:00.1: cvl_0_1 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.173 11:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.173 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
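The nvmf_tcp_init steps above (nvmf/common.sh@250-291) carve the target-side NIC into its own network namespace so a single host can act as both NVMe/TCP target and initiator over real hardware, then open port 4420 and ping in both directions. A minimal dry-run sketch of that sequence — interface names, namespace, and addresses are the values from this run; `run` is a hypothetical helper that only echoes, so the sketch needs no root or real NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence logged above.
# Swap run="echo +" for run="" (and run as root) to apply for real.
set -euo pipefail

nvmf_tcp_init_sketch() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1   # from "Found net devices under ..."
  local ns=cvl_0_0_ns_spdk
  local target_ip=10.0.0.2 initiator_ip=10.0.0.1
  local run="echo +"                             # dry-run: print instead of execute

  $run ip -4 addr flush "$target_if"
  $run ip -4 addr flush "$initiator_if"
  $run ip netns add "$ns"                        # target NIC gets a private netns
  $run ip link set "$target_if" netns "$ns"
  $run ip addr add "$initiator_ip/24" dev "$initiator_if"
  $run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
  $run ip link set "$initiator_if" up
  $run ip netns exec "$ns" ip link set "$target_if" up
  $run ip netns exec "$ns" ip link set lo up
  $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  $run ping -c 1 "$target_ip"                    # initiator -> target
  $run ip netns exec "$ns" ping -c 1 "$initiator_ip"  # target -> initiator
}

nvmf_tcp_init_sketch
```

Everything the target does from here on is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), which is why the nvmf_tgt launch below carries that prefix.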
00:30:34.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:30:34.173 00:30:34.173 --- 10.0.0.2 ping statistics --- 00:30:34.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.173 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:30:34.173 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:30:34.173 00:30:34.173 --- 10.0.0.1 ping statistics --- 00:30:34.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.174 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.174 11:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1934364 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1934364 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1934364 ']' 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.174 11:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.174 [2024-12-06 11:32:06.264206] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:34.174 [2024-12-06 11:32:06.265242] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:30:34.174 [2024-12-06 11:32:06.265274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.174 [2024-12-06 11:32:06.340922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.174 [2024-12-06 11:32:06.381610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.174 [2024-12-06 11:32:06.381647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.174 [2024-12-06 11:32:06.381654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.174 [2024-12-06 11:32:06.381659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.174 [2024-12-06 11:32:06.381664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:34.174 [2024-12-06 11:32:06.383334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.174 [2024-12-06 11:32:06.383377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.174 [2024-12-06 11:32:06.383488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.174 [2024-12-06 11:32:06.383490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.174 [2024-12-06 11:32:06.383991] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.174 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.174 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:34.174 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.174 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:34.174 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 [2024-12-06 11:32:07.212404] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:34.434 [2024-12-06 11:32:07.212575] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:34.434 [2024-12-06 11:32:07.213002] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:34.434 [2024-12-06 11:32:07.213298] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 [2024-12-06 11:32:07.224472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 Malloc0 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 [2024-12-06 11:32:07.296631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1934521 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1934523 00:30:34.434 11:32:07 
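The rpc_cmd sequence above (target/bdev_io_wait.sh@18-25) provisions the target end-to-end: shrink the bdev_io pools so bdevperf actually hits the IO_WAIT path, finish init, create the TCP transport, and expose a malloc bdev as namespace 1 of cnode1 on 10.0.0.2:4420. Collapsed into the equivalent scripts/rpc.py invocations — the rpc.py path is an assumption, and the commands are echoed rather than executed since no target is running here:

```shell
#!/usr/bin/env bash
# Sketch of the RPC provisioning sequence from bdev_io_wait.sh, as plain
# rpc.py calls. RPC path is assumed; commands are printed, not executed.
set -euo pipefail

RPC="scripts/rpc.py"   # assumed location inside the SPDK tree

rpc_cmds=(
  "bdev_set_options -p 5 -c 1"                  # tiny bdev_io pools, to force IO_WAIT
  "framework_start_init"                        # leave the --wait-for-rpc state
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc0"        # 64 MiB bdev, 512 B blocks
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)

for c in "${rpc_cmds[@]}"; do
  echo "$RPC $c"       # dry run; the real test issues these against the nvmf_tgt socket
done
```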
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.434 { 00:30:34.434 "params": { 00:30:34.434 "name": "Nvme$subsystem", 00:30:34.434 "trtype": "$TEST_TRANSPORT", 00:30:34.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.434 "adrfam": "ipv4", 00:30:34.434 "trsvcid": "$NVMF_PORT", 00:30:34.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.434 "hdgst": ${hdgst:-false}, 00:30:34.434 "ddgst": ${ddgst:-false} 00:30:34.434 }, 00:30:34.434 "method": "bdev_nvme_attach_controller" 00:30:34.434 } 00:30:34.434 EOF 00:30:34.434 )") 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1934525 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.434 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1934528 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.434 { 00:30:34.434 "params": { 00:30:34.434 "name": "Nvme$subsystem", 00:30:34.434 "trtype": "$TEST_TRANSPORT", 00:30:34.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.434 "adrfam": "ipv4", 00:30:34.434 "trsvcid": "$NVMF_PORT", 00:30:34.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.434 "hdgst": ${hdgst:-false}, 00:30:34.434 "ddgst": ${ddgst:-false} 00:30:34.434 }, 00:30:34.434 "method": "bdev_nvme_attach_controller" 00:30:34.434 } 00:30:34.434 EOF 00:30:34.434 )") 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:34.434 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.434 { 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme$subsystem", 00:30:34.435 "trtype": "$TEST_TRANSPORT", 00:30:34.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "$NVMF_PORT", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.435 "hdgst": ${hdgst:-false}, 00:30:34.435 "ddgst": ${ddgst:-false} 00:30:34.435 }, 00:30:34.435 "method": "bdev_nvme_attach_controller" 00:30:34.435 } 00:30:34.435 EOF 00:30:34.435 )") 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.435 { 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme$subsystem", 00:30:34.435 "trtype": "$TEST_TRANSPORT", 00:30:34.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "$NVMF_PORT", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.435 "hdgst": ${hdgst:-false}, 00:30:34.435 "ddgst": ${ddgst:-false} 00:30:34.435 }, 00:30:34.435 "method": 
"bdev_nvme_attach_controller" 00:30:34.435 } 00:30:34.435 EOF 00:30:34.435 )") 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1934521 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme1", 00:30:34.435 "trtype": "tcp", 00:30:34.435 "traddr": "10.0.0.2", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "4420", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.435 "hdgst": false, 00:30:34.435 "ddgst": false 00:30:34.435 }, 00:30:34.435 "method": "bdev_nvme_attach_controller" 00:30:34.435 }' 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme1", 00:30:34.435 "trtype": "tcp", 00:30:34.435 "traddr": "10.0.0.2", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "4420", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.435 "hdgst": false, 00:30:34.435 "ddgst": false 00:30:34.435 }, 00:30:34.435 "method": "bdev_nvme_attach_controller" 00:30:34.435 }' 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme1", 00:30:34.435 "trtype": "tcp", 00:30:34.435 "traddr": "10.0.0.2", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "4420", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.435 "hdgst": false, 00:30:34.435 "ddgst": false 00:30:34.435 }, 00:30:34.435 "method": "bdev_nvme_attach_controller" 00:30:34.435 }' 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.435 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.435 "params": { 00:30:34.435 "name": "Nvme1", 00:30:34.435 "trtype": "tcp", 00:30:34.435 "traddr": "10.0.0.2", 00:30:34.435 "adrfam": "ipv4", 00:30:34.435 "trsvcid": "4420", 00:30:34.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.435 "hdgst": false, 00:30:34.435 "ddgst": false 00:30:34.435 }, 00:30:34.435 "method": "bdev_nvme_attach_controller" 
00:30:34.435 }' 00:30:34.435 [2024-12-06 11:32:07.346982] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:30:34.435 [2024-12-06 11:32:07.347028] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:34.435 [2024-12-06 11:32:07.348714] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:30:34.435 [2024-12-06 11:32:07.348752] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:34.435 [2024-12-06 11:32:07.349805] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:30:34.435 [2024-12-06 11:32:07.349812] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
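The heredoc fragments printed above are gen_nvmf_target_json (nvmf/common.sh@560-586) at work: a per-controller template is filled in from the environment, compacted with jq, and handed to each bdevperf instance on /dev/fd/63. A sketch of the entry it produced for this run, with the substituted values visible in the log; the function name here is illustrative and the real helper loops over its subsystem arguments:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_nvme_attach_controller config entry generated above.
# The real helper substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP,
# $NVMF_PORT, etc.; the literal values below are this run's.
set -euo pipefail

gen_nvmf_target_json_sketch() {
  local subsystem=1
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Each bdevperf worker receives this via process substitution, e.g.:
#   bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json_sketch) \
#            -q 128 -o 4096 -w write -t 1 -s 256
gen_nvmf_target_json_sketch
```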
00:30:34.435 [2024-12-06 11:32:07.349849] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:34.435 [2024-12-06 11:32:07.349849] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:34.694 [2024-12-06 11:32:07.524596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.694 [2024-12-06 11:32:07.564815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:34.694 [2024-12-06 11:32:07.615357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.953 [2024-12-06 11:32:07.659173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:34.953 [2024-12-06 11:32:07.669925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.953 [2024-12-06 11:32:07.710209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:34.953 [2024-12-06 11:32:07.727995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.953 [2024-12-06 11:32:07.767911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:34.953 Running I/O for 1 seconds... 00:30:34.953 Running I/O for 1 seconds... 00:30:34.953 Running I/O for 1 seconds... 00:30:35.212 Running I/O for 1 seconds...
00:30:36.146 8426.00 IOPS, 32.91 MiB/s 00:30:36.146 Latency(us) 00:30:36.146 [2024-12-06T10:32:09.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.146 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:36.146 Nvme1n1 : 1.01 8441.16 32.97 0.00 0.00 15047.89 3008.70 17754.30 00:30:36.146 [2024-12-06T10:32:09.084Z] =================================================================================================================== 00:30:36.146 [2024-12-06T10:32:09.084Z] Total : 8441.16 32.97 0.00 0.00 15047.89 3008.70 17754.30 00:30:36.146 12791.00 IOPS, 49.96 MiB/s 00:30:36.146 Latency(us) 00:30:36.146 [2024-12-06T10:32:09.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.147 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:36.147 Nvme1n1 : 1.01 12833.59 50.13 0.00 0.00 9939.04 3738.53 13583.83 00:30:36.147 [2024-12-06T10:32:09.085Z] =================================================================================================================== 00:30:36.147 [2024-12-06T10:32:09.085Z] Total : 12833.59 50.13 0.00 0.00 9939.04 3738.53 13583.83 00:30:36.147 8304.00 IOPS, 32.44 MiB/s 00:30:36.147 Latency(us) 00:30:36.147 [2024-12-06T10:32:09.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.147 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:36.147 Nvme1n1 : 1.01 8434.18 32.95 0.00 0.00 15145.02 2934.23 28001.75 00:30:36.147 [2024-12-06T10:32:09.085Z] =================================================================================================================== 00:30:36.147 [2024-12-06T10:32:09.085Z] Total : 8434.18 32.95 0.00 0.00 15145.02 2934.23 28001.75 00:30:36.147 264400.00 IOPS, 1032.81 MiB/s 00:30:36.147 Latency(us) 00:30:36.147 [2024-12-06T10:32:09.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.147 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:30:36.147 Nvme1n1 : 1.00 264028.33 1031.36 0.00 0.00 482.30 205.73 1392.64 00:30:36.147 [2024-12-06T10:32:09.085Z] =================================================================================================================== 00:30:36.147 [2024-12-06T10:32:09.085Z] Total : 264028.33 1031.36 0.00 0.00 482.30 205.73 1392.64 00:30:36.147 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1934523 00:30:36.147 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1934525 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1934528 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.147 11:32:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.147 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.147 rmmod nvme_tcp 00:30:36.405 rmmod nvme_fabrics 00:30:36.405 rmmod nvme_keyring 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1934364 ']' 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1934364 ']' 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1934364' 00:30:36.405 killing process with pid 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1934364 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.405 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.405 
11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.940 00:30:38.940 real 0m11.311s 00:30:38.940 user 0m14.454s 00:30:38.940 sys 0m6.367s 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:38.940 ************************************ 00:30:38.940 END TEST nvmf_bdev_io_wait 00:30:38.940 ************************************ 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.940 ************************************ 00:30:38.940 START TEST nvmf_queue_depth 00:30:38.940 ************************************ 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:38.940 * Looking for test storage... 
00:30:38.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.940 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.941 --rc genhtml_branch_coverage=1 00:30:38.941 --rc genhtml_function_coverage=1 00:30:38.941 --rc genhtml_legend=1 00:30:38.941 --rc geninfo_all_blocks=1 00:30:38.941 --rc geninfo_unexecuted_blocks=1 00:30:38.941 00:30:38.941 ' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.941 --rc genhtml_branch_coverage=1 00:30:38.941 --rc genhtml_function_coverage=1 00:30:38.941 --rc genhtml_legend=1 00:30:38.941 --rc geninfo_all_blocks=1 00:30:38.941 --rc geninfo_unexecuted_blocks=1 00:30:38.941 00:30:38.941 ' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.941 --rc genhtml_branch_coverage=1 00:30:38.941 --rc genhtml_function_coverage=1 00:30:38.941 --rc genhtml_legend=1 00:30:38.941 --rc geninfo_all_blocks=1 00:30:38.941 --rc geninfo_unexecuted_blocks=1 00:30:38.941 00:30:38.941 ' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.941 --rc genhtml_branch_coverage=1 00:30:38.941 --rc genhtml_function_coverage=1 00:30:38.941 --rc genhtml_legend=1 00:30:38.941 --rc 
geninfo_all_blocks=1 00:30:38.941 --rc geninfo_unexecuted_blocks=1 00:30:38.941 00:30:38.941 ' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.941 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.941 11:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.942 11:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.942 11:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.942 11:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.514 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.514 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.514 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.514 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.515 
11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:45.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.515 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:45.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:45.515 Found net devices under 0000:af:00.0: cvl_0_0 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:45.515 Found net devices under 0000:af:00.1: cvl_0_1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.515 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:30:45.515 00:30:45.515 --- 10.0.0.2 ping statistics --- 00:30:45.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.515 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:45.515 00:30:45.515 --- 10.0.0.1 ping statistics --- 00:30:45.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.515 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.515 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1938532 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1938532 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1938532 ']' 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.515 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.515 [2024-12-06 11:32:17.693825] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.515 [2024-12-06 11:32:17.694690] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:30:45.515 [2024-12-06 11:32:17.694720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.515 [2024-12-06 11:32:17.772693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.515 [2024-12-06 11:32:17.810639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.515 [2024-12-06 11:32:17.810673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.515 [2024-12-06 11:32:17.810680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.516 [2024-12-06 11:32:17.810686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.516 [2024-12-06 11:32:17.810691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.516 [2024-12-06 11:32:17.811238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.516 [2024-12-06 11:32:17.876304] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.516 [2024-12-06 11:32:17.876491] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.773 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 [2024-12-06 11:32:18.559890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 Malloc0 00:30:45.774 11:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 [2024-12-06 11:32:18.635895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.774 
11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1938719 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1938719 /var/tmp/bdevperf.sock 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1938719 ']' 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.774 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 [2024-12-06 11:32:18.687496] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:30:45.774 [2024-12-06 11:32:18.687554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938719 ] 00:30:46.031 [2024-12-06 11:32:18.760910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.031 [2024-12-06 11:32:18.800207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.031 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.031 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:46.031 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.031 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.031 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.290 NVMe0n1 00:30:46.290 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.290 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:46.290 Running I/O for 10 seconds... 
00:30:48.604 13296.00 IOPS, 51.94 MiB/s [2024-12-06T10:32:22.480Z] 13158.00 IOPS, 51.40 MiB/s [2024-12-06T10:32:23.418Z] 13304.00 IOPS, 51.97 MiB/s [2024-12-06T10:32:24.356Z] 13319.25 IOPS, 52.03 MiB/s [2024-12-06T10:32:25.292Z] 13418.60 IOPS, 52.42 MiB/s [2024-12-06T10:32:26.225Z] 13480.17 IOPS, 52.66 MiB/s [2024-12-06T10:32:27.600Z] 13490.00 IOPS, 52.70 MiB/s [2024-12-06T10:32:28.534Z] 13540.62 IOPS, 52.89 MiB/s [2024-12-06T10:32:29.468Z] 13543.44 IOPS, 52.90 MiB/s [2024-12-06T10:32:29.468Z] 13551.10 IOPS, 52.93 MiB/s 00:30:56.530 Latency(us) 00:30:56.530 [2024-12-06T10:32:29.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.530 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:56.530 Verification LBA range: start 0x0 length 0x4000 00:30:56.530 NVMe0n1 : 10.05 13586.16 53.07 0.00 0.00 75119.31 11200.70 50045.67 00:30:56.530 [2024-12-06T10:32:29.468Z] =================================================================================================================== 00:30:56.530 [2024-12-06T10:32:29.468Z] Total : 13586.16 53.07 0.00 0.00 75119.31 11200.70 50045.67 00:30:56.530 { 00:30:56.530 "results": [ 00:30:56.530 { 00:30:56.530 "job": "NVMe0n1", 00:30:56.530 "core_mask": "0x1", 00:30:56.530 "workload": "verify", 00:30:56.530 "status": "finished", 00:30:56.530 "verify_range": { 00:30:56.530 "start": 0, 00:30:56.530 "length": 16384 00:30:56.530 }, 00:30:56.530 "queue_depth": 1024, 00:30:56.530 "io_size": 4096, 00:30:56.530 "runtime": 10.049564, 00:30:56.530 "iops": 13586.161548899037, 00:30:56.530 "mibps": 53.070943550386865, 00:30:56.530 "io_failed": 0, 00:30:56.530 "io_timeout": 0, 00:30:56.530 "avg_latency_us": 75119.3092707897, 00:30:56.530 "min_latency_us": 11200.698181818181, 00:30:56.530 "max_latency_us": 50045.67272727273 00:30:56.530 } 00:30:56.530 ], 00:30:56.530 "core_count": 1 00:30:56.530 } 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1938719 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1938719 ']' 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1938719 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1938719 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938719' 00:30:56.530 killing process with pid 1938719 00:30:56.530 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1938719 00:30:56.530 Received shutdown signal, test time was about 10.000000 seconds 00:30:56.530 00:30:56.530 Latency(us) 00:30:56.530 [2024-12-06T10:32:29.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.531 [2024-12-06T10:32:29.469Z] =================================================================================================================== 00:30:56.531 [2024-12-06T10:32:29.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.531 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1938719 00:30:56.531 11:32:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.789 rmmod nvme_tcp 00:30:56.789 rmmod nvme_fabrics 00:30:56.789 rmmod nvme_keyring 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1938532 ']' 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1938532 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1938532 ']' 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1938532 00:30:56.789 11:32:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1938532 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938532' 00:30:56.789 killing process with pid 1938532 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1938532 00:30:56.789 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1938532 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.051 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.958 00:30:58.958 real 0m20.350s 00:30:58.958 user 0m22.700s 00:30:58.958 sys 0m6.514s 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:58.958 ************************************ 00:30:58.958 END TEST nvmf_queue_depth 00:30:58.958 ************************************ 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.958 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.219 ************************************ 00:30:59.219 START 
TEST nvmf_target_multipath 00:30:59.219 ************************************ 00:30:59.219 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:59.219 * Looking for test storage... 00:30:59.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.219 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:59.219 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:59.219 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.219 11:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:59.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.219 --rc genhtml_branch_coverage=1 00:30:59.219 --rc genhtml_function_coverage=1 00:30:59.219 --rc genhtml_legend=1 00:30:59.219 --rc geninfo_all_blocks=1 00:30:59.219 --rc geninfo_unexecuted_blocks=1 00:30:59.219 00:30:59.219 ' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:59.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.219 --rc genhtml_branch_coverage=1 00:30:59.219 --rc genhtml_function_coverage=1 00:30:59.219 --rc genhtml_legend=1 00:30:59.219 --rc geninfo_all_blocks=1 00:30:59.219 --rc geninfo_unexecuted_blocks=1 00:30:59.219 00:30:59.219 ' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:59.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.219 --rc genhtml_branch_coverage=1 00:30:59.219 --rc genhtml_function_coverage=1 00:30:59.219 --rc genhtml_legend=1 00:30:59.219 --rc geninfo_all_blocks=1 00:30:59.219 --rc geninfo_unexecuted_blocks=1 00:30:59.219 00:30:59.219 ' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:59.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.219 --rc genhtml_branch_coverage=1 00:30:59.219 --rc genhtml_function_coverage=1 00:30:59.219 --rc genhtml_legend=1 00:30:59.219 --rc geninfo_all_blocks=1 00:30:59.219 --rc geninfo_unexecuted_blocks=1 00:30:59.219 00:30:59.219 ' 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.219 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.220 11:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.220 11:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.220 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.793 11:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:05.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:05.793 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:05.793 Found net devices under 0000:af:00.0: cvl_0_0 00:31:05.793 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.794 11:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:05.794 Found net devices under 0000:af:00.1: cvl_0_1 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.794 11:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.794 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.794 11:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:31:05.794 00:31:05.794 --- 10.0.0.2 ping statistics --- 00:31:05.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.794 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:05.794 00:31:05.794 --- 10.0.0.1 ping statistics --- 00:31:05.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.794 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:05.794 only one NIC for nvmf test 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:05.794 11:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.794 rmmod nvme_tcp 00:31:05.794 rmmod nvme_fabrics 00:31:05.794 rmmod nvme_keyring 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:05.794 11:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.794 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.702 
11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.702 00:31:07.702 real 0m8.345s 00:31:07.702 user 0m1.857s 00:31:07.702 sys 0m4.489s 00:31:07.702 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:07.703 ************************************ 00:31:07.703 END TEST nvmf_target_multipath 00:31:07.703 ************************************ 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:07.703 ************************************ 00:31:07.703 START TEST nvmf_zcopy 00:31:07.703 ************************************ 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:07.703 * Looking for test storage... 
00:31:07.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:07.703 11:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:07.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.703 --rc genhtml_branch_coverage=1 00:31:07.703 --rc genhtml_function_coverage=1 00:31:07.703 --rc genhtml_legend=1 00:31:07.703 --rc geninfo_all_blocks=1 00:31:07.703 --rc geninfo_unexecuted_blocks=1 00:31:07.703 00:31:07.703 ' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:07.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.703 --rc genhtml_branch_coverage=1 00:31:07.703 --rc genhtml_function_coverage=1 00:31:07.703 --rc genhtml_legend=1 00:31:07.703 --rc geninfo_all_blocks=1 00:31:07.703 --rc geninfo_unexecuted_blocks=1 00:31:07.703 00:31:07.703 ' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:07.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.703 --rc genhtml_branch_coverage=1 00:31:07.703 --rc genhtml_function_coverage=1 00:31:07.703 --rc genhtml_legend=1 00:31:07.703 --rc geninfo_all_blocks=1 00:31:07.703 --rc geninfo_unexecuted_blocks=1 00:31:07.703 00:31:07.703 ' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:07.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.703 --rc genhtml_branch_coverage=1 00:31:07.703 --rc genhtml_function_coverage=1 00:31:07.703 --rc genhtml_legend=1 00:31:07.703 --rc geninfo_all_blocks=1 00:31:07.703 --rc geninfo_unexecuted_blocks=1 00:31:07.703 00:31:07.703 ' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.703 11:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.703 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.704 11:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.704 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.308 
11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.308 11:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:14.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:14.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:14.308 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:14.309 Found net devices under 0000:af:00.0: cvl_0_0 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:14.309 Found net devices under 0000:af:00.1: cvl_0_1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.309 11:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:31:14.309 00:31:14.309 --- 10.0.0.2 ping statistics --- 00:31:14.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.309 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:31:14.309 00:31:14.309 --- 10.0.0.1 ping statistics --- 00:31:14.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.309 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1947753 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1947753 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1947753 ']' 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.309 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.309 [2024-12-06 11:32:46.500385] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.309 [2024-12-06 11:32:46.501280] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:31:14.309 [2024-12-06 11:32:46.501315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.309 [2024-12-06 11:32:46.577816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.310 [2024-12-06 11:32:46.615267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.310 [2024-12-06 11:32:46.615299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.310 [2024-12-06 11:32:46.615305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.310 [2024-12-06 11:32:46.615311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.310 [2024-12-06 11:32:46.615315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.310 [2024-12-06 11:32:46.615878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.310 [2024-12-06 11:32:46.681261] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.310 [2024-12-06 11:32:46.681450] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
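The `ipts` call traced earlier (nvmf/common.sh@287, expanding at @790 into a full `iptables` command) is the harness's rule-tagging wrapper: it appends an `-m comment` match carrying an `SPDK_NVMF:` prefix plus the original arguments, so teardown can later find and delete exactly the rules this test installed. A minimal re-creation of that wrapper, hypothetical in its details and echoing the command instead of executing it (real `iptables` needs root):

```shell
# Sketch of the harness's ipts wrapper: forward all arguments to iptables,
# tagging the rule with an SPDK_NVMF comment built from those same arguments.
# Echoed rather than executed so it can run unprivileged.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Reproduces the expansion seen in the trace above:
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules by comment (rather than tracking rule numbers, which shift as other rules are inserted) is what lets the cleanup path do a reliable `iptables-save | grep SPDK_NVMF` style sweep.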
00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.642 [2024-12-06 11:32:47.356565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.642 
11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.642 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.642 [2024-12-06 11:32:47.384751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.643 malloc0 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:14.643 { 00:31:14.643 "params": { 00:31:14.643 "name": "Nvme$subsystem", 00:31:14.643 "trtype": "$TEST_TRANSPORT", 00:31:14.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.643 "adrfam": "ipv4", 00:31:14.643 "trsvcid": "$NVMF_PORT", 00:31:14.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.643 "hdgst": ${hdgst:-false}, 00:31:14.643 "ddgst": ${ddgst:-false} 00:31:14.643 }, 00:31:14.643 "method": "bdev_nvme_attach_controller" 00:31:14.643 } 00:31:14.643 EOF 00:31:14.643 )") 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:14.643 11:32:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:14.643 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.643 "params": { 00:31:14.643 "name": "Nvme1", 00:31:14.643 "trtype": "tcp", 00:31:14.643 "traddr": "10.0.0.2", 00:31:14.643 "adrfam": "ipv4", 00:31:14.643 "trsvcid": "4420", 00:31:14.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.643 "hdgst": false, 00:31:14.643 "ddgst": false 00:31:14.643 }, 00:31:14.643 "method": "bdev_nvme_attach_controller" 00:31:14.643 }' 00:31:14.643 [2024-12-06 11:32:47.480695] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:31:14.643 [2024-12-06 11:32:47.480745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947866 ] 00:31:14.643 [2024-12-06 11:32:47.553367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.912 [2024-12-06 11:32:47.593270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.172 Running I/O for 10 seconds... 
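The per-second samples and summary row that follow report throughput in both IOPS and MiB/s; the two columns are linked by the I/O size bdevperf was given (`-o 8192` above, i.e. 8 KiB per I/O). A quick awk sanity check of that conversion, using the first sample's IOPS value from the output below:

```shell
# MiB/s = IOPS * io_size_bytes / (1024 * 1024); with -o 8192 this is IOPS / 128.
awk -v iops=9265.00 'BEGIN { printf "%.2f\n", iops * 8192 / (1024 * 1024) }'
# -> 72.38, matching the first per-second sample reported by bdevperf
```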
00:31:17.045 9265.00 IOPS, 72.38 MiB/s [2024-12-06T10:32:51.355Z] 9318.50 IOPS, 72.80 MiB/s [2024-12-06T10:32:51.919Z] 9316.67 IOPS, 72.79 MiB/s [2024-12-06T10:32:53.293Z] 9335.25 IOPS, 72.93 MiB/s [2024-12-06T10:32:54.231Z] 9354.20 IOPS, 73.08 MiB/s [2024-12-06T10:32:55.171Z] 9368.67 IOPS, 73.19 MiB/s [2024-12-06T10:32:56.105Z] 9371.57 IOPS, 73.22 MiB/s [2024-12-06T10:32:57.039Z] 9379.88 IOPS, 73.28 MiB/s [2024-12-06T10:32:57.976Z] 9383.33 IOPS, 73.31 MiB/s [2024-12-06T10:32:57.976Z] 9384.80 IOPS, 73.32 MiB/s 00:31:25.038 Latency(us) 00:31:25.038 [2024-12-06T10:32:57.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:25.038 Verification LBA range: start 0x0 length 0x1000 00:31:25.038 Nvme1n1 : 10.01 9388.39 73.35 0.00 0.00 13595.79 1705.43 19541.64 00:31:25.038 [2024-12-06T10:32:57.976Z] =================================================================================================================== 00:31:25.038 [2024-12-06T10:32:57.976Z] Total : 9388.39 73.35 0.00 0.00 13595.79 1705.43 19541.64 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1949705 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:25.298 11:32:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.298 { 00:31:25.298 "params": { 00:31:25.298 "name": "Nvme$subsystem", 00:31:25.298 "trtype": "$TEST_TRANSPORT", 00:31:25.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.298 "adrfam": "ipv4", 00:31:25.298 "trsvcid": "$NVMF_PORT", 00:31:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.298 "hdgst": ${hdgst:-false}, 00:31:25.298 "ddgst": ${ddgst:-false} 00:31:25.298 }, 00:31:25.298 "method": "bdev_nvme_attach_controller" 00:31:25.298 } 00:31:25.298 EOF 00:31:25.298 )") 00:31:25.298 [2024-12-06 11:32:58.100205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.100236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:25.298 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.298 "params": { 00:31:25.298 "name": "Nvme1", 00:31:25.298 "trtype": "tcp", 00:31:25.298 "traddr": "10.0.0.2", 00:31:25.298 "adrfam": "ipv4", 00:31:25.298 "trsvcid": "4420", 00:31:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.298 "hdgst": false, 00:31:25.298 "ddgst": false 00:31:25.298 }, 00:31:25.298 "method": "bdev_nvme_attach_controller" 00:31:25.298 }' 00:31:25.298 [2024-12-06 11:32:58.112172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.112184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.124172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.124182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.136169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.136177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.140279] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:31:25.298 [2024-12-06 11:32:58.140318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949705 ] 00:31:25.298 [2024-12-06 11:32:58.148169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.148179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.160169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.160178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.172172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.172182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.184168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.184177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.196169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.196178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.208168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.298 [2024-12-06 11:32:58.208177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.298 [2024-12-06 11:32:58.210154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.298 [2024-12-06 11:32:58.220172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:25.299 [2024-12-06 11:32:58.220185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.299 [2024-12-06 11:32:58.232170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.299 [2024-12-06 11:32:58.232181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.244171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.244182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.249288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.559 [2024-12-06 11:32:58.256177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.256191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.268179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.268196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.280175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.280189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.292172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.292183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.304173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.304183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.316174] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.316185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.328170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.328180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.340180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.340200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.352176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.352189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.364175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.364188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.376173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.376184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.388171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.388180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.400172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.400185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.412174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.412188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.424232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.424246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.436180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.436196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 Running I/O for 5 seconds... 00:31:25.559 [2024-12-06 11:32:58.449397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.449416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.463823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.463841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.477611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.477629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.559 [2024-12-06 11:32:58.491304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.559 [2024-12-06 11:32:58.491322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.505030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.505048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.519842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.519864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.533599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.533617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.548272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.548289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.560638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.560655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.573900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.573917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.587255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.587273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.600700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.600717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.615278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.615296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.629035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 
[2024-12-06 11:32:58.629053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.640794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.640810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.653205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.653223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.668065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.668083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.682198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.682216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.695975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.695993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.709705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.709723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.723555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.723573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.737618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.737635] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.819 [2024-12-06 11:32:58.751879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.819 [2024-12-06 11:32:58.751897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.765715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.765733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.780027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.780050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.793603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.793620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.807491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.807509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.820944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.820961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.835413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.835430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.079 [2024-12-06 11:32:58.849135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.079 [2024-12-06 11:32:58.849152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:31:26.079 [2024-12-06 11:32:58.864502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:26.079 [2024-12-06 11:32:58.864519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:26.079 [... the spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair above repeats at roughly 14 ms intervals through 2024-12-06 11:33:01.125 (elapsed 00:31:28.419) ...]
00:31:26.598 18067.00 IOPS, 141.15 MiB/s [2024-12-06T10:32:59.536Z]
00:31:27.639 17929.50 IOPS, 140.07 MiB/s [2024-12-06T10:33:00.577Z]
[2024-12-06 11:33:01.125565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.139529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.139547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.152899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.152916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.167335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.167352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.180727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.180744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.195889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.195911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.209815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.209832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.224633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.224649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.239869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.239886] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.253465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.253481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.264267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.264283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.278467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.278484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.291815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.291832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.305231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.305248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.319796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.319814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.333524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.333557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.419 [2024-12-06 11:33:01.347879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.419 [2024-12-06 11:33:01.347896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:28.679 [2024-12-06 11:33:01.361486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.361503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.375737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.375755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.390025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.390043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.404316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.404334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.417109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.417128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.431527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.431546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.445187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.445205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 17992.00 IOPS, 140.56 MiB/s [2024-12-06T10:33:01.617Z] [2024-12-06 11:33:01.459907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.459924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:28.679 [2024-12-06 11:33:01.473663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.473680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.487737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.487754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.501781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.501799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.515998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.516015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.529535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.529554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.543782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.543800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.557500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.557517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.571419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.571437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.585146] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.585164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.600183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.600201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.679 [2024-12-06 11:33:01.612803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.679 [2024-12-06 11:33:01.612821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.627207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.627225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.640684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.640701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.655643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.655664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.669607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.669625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.684610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.684628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.699687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.699705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.713177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.713195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.728204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.728221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.741384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.741401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.752496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.752512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.765455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.765473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.779433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.779451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.793082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.793098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.807910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 
[2024-12-06 11:33:01.807928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.821616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.821632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.836590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.836607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.849146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.849164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.939 [2024-12-06 11:33:01.863559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.939 [2024-12-06 11:33:01.863577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.877633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.877651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.892183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.892201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.905185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.905203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.916762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.916783] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.932016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.932033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.945758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.945774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.959910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.959927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.973340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.973357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:01.987949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:01.987967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.001304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.001320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.016055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.016077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.029445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.029462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:29.199 [2024-12-06 11:33:02.043808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.043825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.057995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.058012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.072512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.072528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.087702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.087719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.101434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.101451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.115902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.115920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.199 [2024-12-06 11:33:02.129736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.199 [2024-12-06 11:33:02.129753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.143278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.143296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.156633] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.156649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.171248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.171266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.185418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.185444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.200285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.200303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.212742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.212759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.225352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.225368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.239492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.239510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.253266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.253282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.267662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.267680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.281632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.281650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.295186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.295204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.308776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.308793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.324397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.324414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.336129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.336147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.349853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.349870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.363918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.363935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.377613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 
[2024-12-06 11:33:02.377630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.459 [2024-12-06 11:33:02.392182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.459 [2024-12-06 11:33:02.392200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.719 [2024-12-06 11:33:02.405748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.719 [2024-12-06 11:33:02.405765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.719 [2024-12-06 11:33:02.420126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.420143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.433809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.433825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.447492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.447512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 18030.00 IOPS, 140.86 MiB/s [2024-12-06T10:33:02.658Z] [2024-12-06 11:33:02.460650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.460666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.475142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.475159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.489114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 
[2024-12-06 11:33:02.489131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.503897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.503914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.517369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.517386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.531920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.531937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.545378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.545394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.559279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.559295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.573172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.573189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.587707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.587725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.601214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.601231] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.615729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.615746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.629641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.629657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.720 [2024-12-06 11:33:02.643982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.720 [2024-12-06 11:33:02.643999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.980 [2024-12-06 11:33:02.657622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.657639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.980 [2024-12-06 11:33:02.671577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.671594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.980 [2024-12-06 11:33:02.685535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.685553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.980 [2024-12-06 11:33:02.700071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.700088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.980 [2024-12-06 11:33:02.713297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.713313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:29.980 [2024-12-06 11:33:02.728326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.980 [2024-12-06 11:33:02.728342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... identical two-line error pair repeated at ~13 ms intervals from 11:33:02.740 through 11:33:03.397 ...] 00:31:30.501 [2024-12-06 11:33:03.411303] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.501 [2024-12-06 11:33:03.411321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.501 [2024-12-06 11:33:03.425376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.501 [2024-12-06 11:33:03.425393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.760 [2024-12-06 11:33:03.440307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.760 [2024-12-06 11:33:03.440325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.760 [2024-12-06 11:33:03.453560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.760 [2024-12-06 11:33:03.453577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.760 18078.40 IOPS, 141.24 MiB/s 00:31:30.760 Latency(us) 00:31:30.760 [2024-12-06T10:33:03.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.761 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:30.761 Nvme1n1 : 5.01 18079.06 141.24 0.00 0.00 7073.87 1921.40 14775.39 00:31:30.761 [2024-12-06T10:33:03.699Z] =================================================================================================================== 00:31:30.761 [2024-12-06T10:33:03.699Z] Total : 18079.06 141.24 0.00 0.00 7073.87 1921.40 14775.39 00:31:30.761 [2024-12-06 11:33:03.464178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.761 [2024-12-06 11:33:03.464193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.761 [2024-12-06 11:33:03.476173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.761 [2024-12-06 11:33:03.476186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace [... identical two-line error pair repeated at ~12 ms intervals from 11:33:03.488 through 11:33:03.572 ...] 00:31:30.761 
[2024-12-06 11:33:03.584169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.761 [2024-12-06 11:33:03.584178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.761 [2024-12-06 11:33:03.596172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.761 [2024-12-06 11:33:03.596182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.761 [2024-12-06 11:33:03.608168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.761 [2024-12-06 11:33:03.608175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1949705) - No such process 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1949705 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.761 delay0 00:31:30.761 11:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.761 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:31.021 [2024-12-06 11:33:03.796182] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:37.589 Initializing NVMe Controllers 00:31:37.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.590 Initialization complete. Launching workers. 
00:31:37.590 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 18130 00:31:37.590 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18270, failed to submit 125 00:31:37.590 success 18192, unsuccessful 78, failed 0 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.590 rmmod nvme_tcp 00:31:37.590 rmmod nvme_fabrics 00:31:37.590 rmmod nvme_keyring 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1947753 ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 1947753 ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1947753' 00:31:37.590 killing process with pid 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1947753 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.590 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.497 00:31:39.497 real 0m32.057s 00:31:39.497 user 0m40.256s 00:31:39.497 sys 0m12.967s 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.497 ************************************ 00:31:39.497 END TEST nvmf_zcopy 00:31:39.497 ************************************ 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.497 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:39.757 
************************************ 00:31:39.757 START TEST nvmf_nmic 00:31:39.757 ************************************ 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:39.757 * Looking for test storage... 00:31:39.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.757 11:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.757 11:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:39.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.757 --rc genhtml_branch_coverage=1 00:31:39.757 --rc genhtml_function_coverage=1 00:31:39.757 --rc genhtml_legend=1 00:31:39.757 --rc geninfo_all_blocks=1 00:31:39.757 --rc geninfo_unexecuted_blocks=1 00:31:39.757 00:31:39.757 ' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:39.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.757 --rc genhtml_branch_coverage=1 00:31:39.757 --rc genhtml_function_coverage=1 00:31:39.757 --rc genhtml_legend=1 00:31:39.757 --rc geninfo_all_blocks=1 00:31:39.757 --rc geninfo_unexecuted_blocks=1 00:31:39.757 00:31:39.757 ' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:39.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.757 --rc genhtml_branch_coverage=1 00:31:39.757 --rc genhtml_function_coverage=1 00:31:39.757 --rc genhtml_legend=1 00:31:39.757 --rc geninfo_all_blocks=1 00:31:39.757 --rc geninfo_unexecuted_blocks=1 00:31:39.757 00:31:39.757 ' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:39.757 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.757 --rc genhtml_branch_coverage=1 00:31:39.757 --rc genhtml_function_coverage=1 00:31:39.757 --rc genhtml_legend=1 00:31:39.757 --rc geninfo_all_blocks=1 00:31:39.757 --rc geninfo_unexecuted_blocks=1 00:31:39.757 00:31:39.757 ' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:39.757 11:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.757 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.757 11:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.758 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.328 11:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.328 11:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:46.328 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:46.328 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.328 11:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.328 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:46.329 Found net devices under 0000:af:00.0: cvl_0_0 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.329 11:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:46.329 Found net devices under 0000:af:00.1: cvl_0_1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.329 11:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:31:46.329 00:31:46.329 --- 10.0.0.2 ping statistics --- 00:31:46.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.329 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:31:46.329 00:31:46.329 --- 10.0.0.1 ping statistics --- 00:31:46.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.329 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1955892 
00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1955892 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1955892 ']' 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.329 11:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.329 [2024-12-06 11:33:18.631330] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.329 [2024-12-06 11:33:18.632270] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:31:46.329 [2024-12-06 11:33:18.632309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.330 [2024-12-06 11:33:18.710697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.330 [2024-12-06 11:33:18.751030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.330 [2024-12-06 11:33:18.751076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.330 [2024-12-06 11:33:18.751083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.330 [2024-12-06 11:33:18.751089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.330 [2024-12-06 11:33:18.751094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.330 [2024-12-06 11:33:18.752658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.330 [2024-12-06 11:33:18.752788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.330 [2024-12-06 11:33:18.752883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.330 [2024-12-06 11:33:18.752884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.330 [2024-12-06 11:33:18.820354] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.330 [2024-12-06 11:33:18.820547] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.330 [2024-12-06 11:33:18.821088] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:46.330 [2024-12-06 11:33:18.821299] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:46.330 [2024-12-06 11:33:18.821351] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.590 [2024-12-06 11:33:19.493632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.590 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 Malloc0 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 [2024-12-06 11:33:19.569681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.851 11:33:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:46.851 test case1: single bdev can't be used in multiple subsystems 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 [2024-12-06 11:33:19.605310] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:46.851 [2024-12-06 11:33:19.605328] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:46.851 [2024-12-06 11:33:19.605335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.851 request: 00:31:46.851 { 00:31:46.851 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:46.851 "namespace": { 00:31:46.851 "bdev_name": "Malloc0", 00:31:46.851 "no_auto_visible": false, 00:31:46.851 "hide_metadata": false 00:31:46.851 }, 00:31:46.851 "method": "nvmf_subsystem_add_ns", 00:31:46.851 "req_id": 1 00:31:46.851 } 00:31:46.851 Got JSON-RPC error response 00:31:46.851 response: 00:31:46.851 { 00:31:46.851 "code": -32602, 00:31:46.851 "message": "Invalid parameters" 00:31:46.851 } 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:46.851 Adding namespace failed - expected result. 
00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:46.851 test case2: host connect to nvmf target in multiple paths 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.851 [2024-12-06 11:33:19.617404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.851 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:47.111 11:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:47.370 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:47.370 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:47.370 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:47.370 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:47.370 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:49.273 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:49.273 [global] 00:31:49.273 thread=1 00:31:49.273 invalidate=1 00:31:49.273 rw=write 00:31:49.273 time_based=1 00:31:49.273 runtime=1 00:31:49.273 ioengine=libaio 00:31:49.273 direct=1 00:31:49.273 bs=4096 00:31:49.273 iodepth=1 00:31:49.273 norandommap=0 00:31:49.273 numjobs=1 00:31:49.273 00:31:49.273 verify_dump=1 00:31:49.273 verify_backlog=512 00:31:49.273 verify_state_save=0 00:31:49.273 do_verify=1 00:31:49.273 verify=crc32c-intel 00:31:49.273 [job0] 00:31:49.273 filename=/dev/nvme0n1 00:31:49.273 Could not set queue depth (nvme0n1) 00:31:49.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.531 fio-3.35 00:31:49.531 Starting 1 thread 00:31:50.909 00:31:50.909 job0: (groupid=0, jobs=1): err= 0: pid=1956723: Fri Dec 6 
11:33:23 2024 00:31:50.909 read: IOPS=2467, BW=9870KiB/s (10.1MB/s)(9880KiB/1001msec) 00:31:50.909 slat (nsec): min=6520, max=26783, avg=7389.04, stdev=858.39 00:31:50.909 clat (usec): min=187, max=383, avg=236.77, stdev=21.88 00:31:50.909 lat (usec): min=194, max=391, avg=244.16, stdev=21.89 00:31:50.909 clat percentiles (usec): 00:31:50.909 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 204], 00:31:50.909 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 247], 00:31:50.909 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 255], 00:31:50.909 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 310], 99.95th=[ 371], 00:31:50.909 | 99.99th=[ 383] 00:31:50.909 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:50.909 slat (usec): min=9, max=27106, avg=21.19, stdev=535.54 00:31:50.909 clat (usec): min=117, max=317, avg=129.53, stdev= 8.67 00:31:50.909 lat (usec): min=127, max=27379, avg=150.72, stdev=538.44 00:31:50.909 clat percentiles (usec): 00:31:50.909 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 126], 00:31:50.909 | 30.00th=[ 127], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 130], 00:31:50.909 | 70.00th=[ 131], 80.00th=[ 133], 90.00th=[ 135], 95.00th=[ 139], 00:31:50.909 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 273], 00:31:50.909 | 99.99th=[ 318] 00:31:50.909 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:50.909 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:50.909 lat (usec) : 250=89.90%, 500=10.10% 00:31:50.909 cpu : usr=2.30%, sys=4.80%, ctx=5033, majf=0, minf=1 00:31:50.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.909 issued rwts: total=2470,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.909 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:31:50.909 00:31:50.909 Run status group 0 (all jobs): 00:31:50.909 READ: bw=9870KiB/s (10.1MB/s), 9870KiB/s-9870KiB/s (10.1MB/s-10.1MB/s), io=9880KiB (10.1MB), run=1001-1001msec 00:31:50.909 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:50.909 00:31:50.909 Disk stats (read/write): 00:31:50.909 nvme0n1: ios=2104/2560, merge=0/0, ticks=1469/316, in_queue=1785, util=98.50% 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:50.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:50.909 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.910 rmmod nvme_tcp 00:31:50.910 rmmod nvme_fabrics 00:31:50.910 rmmod nvme_keyring 00:31:50.910 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1955892 ']' 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1955892 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1955892 ']' 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1955892 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1955892 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1955892' 00:31:51.169 killing process with pid 1955892 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1955892 00:31:51.169 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1955892 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:51.169 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:51.170 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.704 00:31:53.704 real 0m13.710s 00:31:53.704 user 0m27.296s 00:31:53.704 sys 0m6.122s 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.704 ************************************ 00:31:53.704 END TEST nvmf_nmic 00:31:53.704 ************************************ 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.704 ************************************ 00:31:53.704 START TEST nvmf_fio_target 00:31:53.704 ************************************ 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.704 * Looking for test storage... 
00:31:53.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.704 
11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.704 --rc genhtml_branch_coverage=1 00:31:53.704 --rc genhtml_function_coverage=1 00:31:53.704 --rc genhtml_legend=1 00:31:53.704 --rc geninfo_all_blocks=1 00:31:53.704 --rc geninfo_unexecuted_blocks=1 00:31:53.704 00:31:53.704 ' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.704 --rc genhtml_branch_coverage=1 00:31:53.704 --rc genhtml_function_coverage=1 00:31:53.704 --rc genhtml_legend=1 00:31:53.704 --rc geninfo_all_blocks=1 00:31:53.704 --rc geninfo_unexecuted_blocks=1 00:31:53.704 00:31:53.704 ' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.704 --rc genhtml_branch_coverage=1 00:31:53.704 --rc genhtml_function_coverage=1 00:31:53.704 --rc genhtml_legend=1 00:31:53.704 --rc geninfo_all_blocks=1 00:31:53.704 --rc geninfo_unexecuted_blocks=1 00:31:53.704 00:31:53.704 ' 00:31:53.704 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.704 --rc genhtml_branch_coverage=1 00:31:53.704 --rc genhtml_function_coverage=1 00:31:53.704 --rc genhtml_legend=1 00:31:53.704 --rc geninfo_all_blocks=1 
00:31:53.704 --rc geninfo_unexecuted_blocks=1 00:31:53.704 00:31:53.705 ' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:53.705 
11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.705 11:33:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.705 
11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.705 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.705 11:33:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.276 11:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.276 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:00.277 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:00.277 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.277 
11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:00.277 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:00.277 Found net devices under 0000:af:00.1: cvl_0_1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.277 11:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:32:00.277 00:32:00.277 --- 10.0.0.2 ping statistics --- 00:32:00.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.277 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:32:00.277 00:32:00.277 --- 10.0.0.1 ping statistics --- 00:32:00.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.277 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.277 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.278 11:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1960528 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1960528 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1960528 ']' 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.278 11:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.278 [2024-12-06 11:33:32.441956] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.278 [2024-12-06 11:33:32.442821] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:32:00.278 [2024-12-06 11:33:32.442855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.278 [2024-12-06 11:33:32.519452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.278 [2024-12-06 11:33:32.557322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.278 [2024-12-06 11:33:32.557359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.278 [2024-12-06 11:33:32.557365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.278 [2024-12-06 11:33:32.557371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.278 [2024-12-06 11:33:32.557375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.278 [2024-12-06 11:33:32.558930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.278 [2024-12-06 11:33:32.559045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.278 [2024-12-06 11:33:32.559158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.278 [2024-12-06 11:33:32.559159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.278 [2024-12-06 11:33:32.626610] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.278 [2024-12-06 11:33:32.626891] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.278 [2024-12-06 11:33:32.627380] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:00.278 [2024-12-06 11:33:32.627549] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.278 [2024-12-06 11:33:32.627596] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.536 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.536 [2024-12-06 11:33:33.443931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.795 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.795 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:00.795 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:01.054 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:01.054 11:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.313 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:01.313 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.313 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:01.313 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:01.571 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.830 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:01.830 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.093 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:02.093 11:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.093 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
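The RPC sequence above (target/fio.sh steps 21-32) builds seven 64 MiB malloc bdevs and combines some of them into RAID bdevs: Malloc0/Malloc1 stay standalone, Malloc2+Malloc3 become a raid0 bdev, and Malloc4..Malloc6 become a concat bdev. A dry-run sketch of that sequence is below; the rpc.py subcommands and flags are taken from the log, but the `rpc` wrapper here only echoes the commands (assumption: you would point it at `scripts/rpc.py` against a live nvmf_tgt to actually execute them).

```shell
#!/bin/sh
# Dry-run sketch of the bdev layout target/fio.sh builds above.
# "rpc" just echoes; swap the echo for scripts/rpc.py to run for real.
rpc() { echo rpc.py "$@"; }

# Seven malloc bdevs: 64 MiB each, 512-byte block size (-> Malloc0..Malloc6)
for i in 0 1 2 3 4 5 6; do
  rpc bdev_malloc_create 64 512
done

# Malloc2+Malloc3 -> raid0, Malloc4..Malloc6 -> concat, 64 KiB strip size
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
```

The subsequent log records then attach Malloc0, Malloc1, raid0, and concat0 as namespaces of `nqn.2016-06.io.spdk:cnode1`, which is why the initiator later sees four nvme0nX devices.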
00:32:02.093 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:02.351 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:02.610 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:02.610 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:02.867 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:02.867 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:02.867 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.124 [2024-12-06 11:33:35.911826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.124 11:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:03.382 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:03.382 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:03.947 11:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:05.851 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:05.851 [global] 00:32:05.851 thread=1 00:32:05.851 invalidate=1 00:32:05.851 rw=write 00:32:05.851 time_based=1 00:32:05.851 runtime=1 00:32:05.851 ioengine=libaio 00:32:05.851 direct=1 00:32:05.851 bs=4096 00:32:05.851 iodepth=1 00:32:05.851 norandommap=0 00:32:05.851 numjobs=1 00:32:05.851 00:32:05.851 verify_dump=1 00:32:05.851 verify_backlog=512 00:32:05.851 verify_state_save=0 00:32:05.851 do_verify=1 00:32:05.851 verify=crc32c-intel 00:32:05.851 [job0] 00:32:05.851 filename=/dev/nvme0n1 00:32:05.851 [job1] 00:32:05.851 filename=/dev/nvme0n2 00:32:05.851 [job2] 00:32:05.851 filename=/dev/nvme0n3 00:32:05.851 [job3] 00:32:05.851 filename=/dev/nvme0n4 00:32:05.851 Could not set queue depth (nvme0n1) 00:32:05.851 Could not set queue depth (nvme0n2) 00:32:05.851 Could not set queue depth (nvme0n3) 00:32:05.851 Could not set queue depth (nvme0n4) 00:32:06.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.111 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.111 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.111 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.111 fio-3.35 00:32:06.111 Starting 4 threads 00:32:07.488 00:32:07.488 job0: (groupid=0, jobs=1): err= 0: pid=1961995: Fri Dec 6 11:33:40 2024 00:32:07.488 read: IOPS=55, BW=222KiB/s (227kB/s)(224KiB/1009msec) 00:32:07.488 slat (nsec): min=6864, max=26364, avg=13905.45, stdev=7667.70 00:32:07.488 clat (usec): min=202, max=41966, avg=16292.79, stdev=20082.12 00:32:07.488 lat (usec): min=209, 
max=41989, avg=16306.69, stdev=20089.15 00:32:07.488 clat percentiles (usec): 00:32:07.488 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 251], 00:32:07.489 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 314], 60.00th=[ 685], 00:32:07.489 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:07.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:07.489 | 99.99th=[42206] 00:32:07.489 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:07.489 slat (nsec): min=9413, max=40030, avg=10397.45, stdev=1773.19 00:32:07.489 clat (usec): min=137, max=268, avg=174.26, stdev=16.87 00:32:07.489 lat (usec): min=147, max=308, avg=184.65, stdev=17.31 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 163], 00:32:07.489 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:32:07.489 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:32:07.489 | 99.00th=[ 227], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 269], 00:32:07.489 | 99.99th=[ 269] 00:32:07.489 bw ( KiB/s): min= 4096, max= 4096, per=13.56%, avg=4096.00, stdev= 0.00, samples=1 00:32:07.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:07.489 lat (usec) : 250=91.55%, 500=4.40%, 750=0.18% 00:32:07.489 lat (msec) : 50=3.87% 00:32:07.489 cpu : usr=0.30%, sys=0.60%, ctx=568, majf=0, minf=1 00:32:07.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.489 job1: (groupid=0, jobs=1): err= 0: pid=1961996: Fri Dec 6 11:33:40 2024 00:32:07.489 read: IOPS=2200, BW=8803KiB/s 
(9014kB/s)(8812KiB/1001msec) 00:32:07.489 slat (nsec): min=7396, max=23077, avg=8843.95, stdev=1291.58 00:32:07.489 clat (usec): min=186, max=3731, avg=232.57, stdev=83.31 00:32:07.489 lat (usec): min=194, max=3739, avg=241.42, stdev=83.41 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:32:07.489 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:32:07.489 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 277], 00:32:07.489 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 1004], 99.95th=[ 1434], 00:32:07.489 | 99.99th=[ 3720] 00:32:07.489 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:07.489 slat (nsec): min=11083, max=39444, avg=12722.08, stdev=1654.38 00:32:07.489 clat (usec): min=123, max=402, avg=164.08, stdev=23.88 00:32:07.489 lat (usec): min=135, max=420, avg=176.80, stdev=24.20 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:32:07.489 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:32:07.489 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 221], 00:32:07.489 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 355], 99.95th=[ 396], 00:32:07.489 | 99.99th=[ 404] 00:32:07.489 bw ( KiB/s): min=10480, max=10480, per=34.69%, avg=10480.00, stdev= 0.00, samples=1 00:32:07.489 iops : min= 2620, max= 2620, avg=2620.00, stdev= 0.00, samples=1 00:32:07.489 lat (usec) : 250=92.86%, 500=7.08% 00:32:07.489 lat (msec) : 2=0.04%, 4=0.02% 00:32:07.489 cpu : usr=4.20%, sys=8.00%, ctx=4764, majf=0, minf=1 00:32:07.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 issued rwts: total=2203,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.489 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:32:07.489 job2: (groupid=0, jobs=1): err= 0: pid=1961997: Fri Dec 6 11:33:40 2024 00:32:07.489 read: IOPS=1967, BW=7870KiB/s (8059kB/s)(8004KiB/1017msec) 00:32:07.489 slat (nsec): min=7517, max=40629, avg=8728.25, stdev=1702.97 00:32:07.489 clat (usec): min=198, max=40982, avg=308.28, stdev=1284.73 00:32:07.489 lat (usec): min=207, max=40997, avg=317.01, stdev=1284.89 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:32:07.489 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:32:07.489 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 363], 95.00th=[ 433], 00:32:07.489 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 1893], 99.95th=[40633], 00:32:07.489 | 99.99th=[41157] 00:32:07.489 write: IOPS=2013, BW=8055KiB/s (8248kB/s)(8192KiB/1017msec); 0 zone resets 00:32:07.489 slat (nsec): min=11191, max=79597, avg=12873.22, stdev=2784.27 00:32:07.489 clat (usec): min=131, max=275, avg=167.55, stdev=22.44 00:32:07.489 lat (usec): min=142, max=302, avg=180.42, stdev=22.72 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:32:07.489 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:32:07.489 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 204], 95.00th=[ 217], 00:32:07.489 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 260], 99.95th=[ 260], 00:32:07.489 | 99.99th=[ 277] 00:32:07.489 bw ( KiB/s): min= 8192, max= 8192, per=27.12%, avg=8192.00, stdev= 0.00, samples=2 00:32:07.489 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:32:07.489 lat (usec) : 250=77.53%, 500=21.96%, 750=0.42% 00:32:07.489 lat (msec) : 2=0.05%, 50=0.05% 00:32:07.489 cpu : usr=4.23%, sys=5.81%, ctx=4050, majf=0, minf=1 00:32:07.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:07.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 issued rwts: total=2001,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.489 job3: (groupid=0, jobs=1): err= 0: pid=1961998: Fri Dec 6 11:33:40 2024 00:32:07.489 read: IOPS=2087, BW=8352KiB/s (8552kB/s)(8360KiB/1001msec) 00:32:07.489 slat (nsec): min=7549, max=50810, avg=8765.74, stdev=1847.02 00:32:07.489 clat (usec): min=202, max=992, avg=244.73, stdev=34.82 00:32:07.489 lat (usec): min=212, max=1002, avg=253.50, stdev=34.92 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:32:07.489 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:32:07.489 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 262], 00:32:07.489 | 99.00th=[ 343], 99.50th=[ 469], 99.90th=[ 676], 99.95th=[ 963], 00:32:07.489 | 99.99th=[ 996] 00:32:07.489 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:07.489 slat (nsec): min=9591, max=36202, avg=12336.68, stdev=1634.87 00:32:07.489 clat (usec): min=133, max=1219, avg=166.02, stdev=27.27 00:32:07.489 lat (usec): min=145, max=1231, avg=178.36, stdev=27.44 00:32:07.489 clat percentiles (usec): 00:32:07.489 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:32:07.489 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:32:07.489 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 208], 00:32:07.489 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 265], 99.95th=[ 326], 00:32:07.489 | 99.99th=[ 1221] 00:32:07.489 bw ( KiB/s): min=10112, max=10112, per=33.48%, avg=10112.00, stdev= 0.00, samples=1 00:32:07.489 iops : min= 2528, max= 2528, avg=2528.00, stdev= 0.00, samples=1 00:32:07.489 lat (usec) : 250=91.01%, 500=8.82%, 750=0.11%, 1000=0.04% 00:32:07.489 lat (msec) : 2=0.02% 00:32:07.489 cpu : usr=3.80%, sys=7.70%, 
ctx=4652, majf=0, minf=1 00:32:07.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.489 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.489 00:32:07.489 Run status group 0 (all jobs): 00:32:07.489 READ: bw=24.4MiB/s (25.6MB/s), 222KiB/s-8803KiB/s (227kB/s-9014kB/s), io=24.8MiB (26.0MB), run=1001-1017msec 00:32:07.489 WRITE: bw=29.5MiB/s (30.9MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1001-1017msec 00:32:07.489 00:32:07.489 Disk stats (read/write): 00:32:07.489 nvme0n1: ios=102/512, merge=0/0, ticks=773/90, in_queue=863, util=87.07% 00:32:07.489 nvme0n2: ios=2007/2048, merge=0/0, ticks=1333/314, in_queue=1647, util=89.85% 00:32:07.489 nvme0n3: ios=1778/2048, merge=0/0, ticks=1331/328, in_queue=1659, util=93.55% 00:32:07.489 nvme0n4: ios=1908/2048, merge=0/0, ticks=1340/323, in_queue=1663, util=94.23% 00:32:07.489 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:07.489 [global] 00:32:07.489 thread=1 00:32:07.489 invalidate=1 00:32:07.489 rw=randwrite 00:32:07.489 time_based=1 00:32:07.489 runtime=1 00:32:07.489 ioengine=libaio 00:32:07.489 direct=1 00:32:07.489 bs=4096 00:32:07.489 iodepth=1 00:32:07.489 norandommap=0 00:32:07.489 numjobs=1 00:32:07.489 00:32:07.489 verify_dump=1 00:32:07.489 verify_backlog=512 00:32:07.489 verify_state_save=0 00:32:07.489 do_verify=1 00:32:07.489 verify=crc32c-intel 00:32:07.489 [job0] 00:32:07.489 filename=/dev/nvme0n1 00:32:07.489 [job1] 00:32:07.489 filename=/dev/nvme0n2 00:32:07.489 [job2] 00:32:07.489 filename=/dev/nvme0n3 00:32:07.489 [job3] 
00:32:07.489 filename=/dev/nvme0n4 00:32:07.489 Could not set queue depth (nvme0n1) 00:32:07.489 Could not set queue depth (nvme0n2) 00:32:07.489 Could not set queue depth (nvme0n3) 00:32:07.489 Could not set queue depth (nvme0n4) 00:32:07.747 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.747 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.747 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.747 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.747 fio-3.35 00:32:07.747 Starting 4 threads 00:32:09.251 00:32:09.251 job0: (groupid=0, jobs=1): err= 0: pid=1962414: Fri Dec 6 11:33:41 2024 00:32:09.251 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:32:09.251 slat (nsec): min=9282, max=23012, avg=21632.74, stdev=2737.43 00:32:09.251 clat (usec): min=40696, max=41111, avg=40949.30, stdev=97.67 00:32:09.251 lat (usec): min=40706, max=41133, avg=40970.93, stdev=99.07 00:32:09.251 clat percentiles (usec): 00:32:09.251 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:32:09.251 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:09.251 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:09.251 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:09.251 | 99.99th=[41157] 00:32:09.251 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:32:09.251 slat (nsec): min=8044, max=33710, avg=10022.35, stdev=1427.51 00:32:09.251 clat (usec): min=134, max=298, avg=156.58, stdev=11.83 00:32:09.251 lat (usec): min=142, max=332, avg=166.60, stdev=12.49 00:32:09.251 clat percentiles (usec): 00:32:09.251 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:32:09.251 | 30.00th=[ 
151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:32:09.251 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:32:09.251 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 297], 99.95th=[ 297], 00:32:09.251 | 99.99th=[ 297] 00:32:09.251 bw ( KiB/s): min= 4096, max= 4096, per=34.30%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.251 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.251 lat (usec) : 250=95.51%, 500=0.19% 00:32:09.251 lat (msec) : 50=4.30% 00:32:09.251 cpu : usr=0.29%, sys=0.39%, ctx=535, majf=0, minf=1 00:32:09.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.251 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.251 job1: (groupid=0, jobs=1): err= 0: pid=1962415: Fri Dec 6 11:33:41 2024 00:32:09.251 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:32:09.251 slat (nsec): min=9678, max=24524, avg=22711.41, stdev=3055.88 00:32:09.251 clat (usec): min=40694, max=41032, avg=40952.65, stdev=73.12 00:32:09.251 lat (usec): min=40718, max=41054, avg=40975.36, stdev=73.99 00:32:09.251 clat percentiles (usec): 00:32:09.251 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:09.251 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:09.251 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:09.251 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:09.251 | 99.99th=[41157] 00:32:09.251 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:32:09.251 slat (nsec): min=10626, max=37757, avg=11790.54, stdev=1550.12 00:32:09.251 clat (usec): min=122, max=266, avg=182.30, stdev=41.02 00:32:09.251 lat (usec): 
min=133, max=296, avg=194.09, stdev=41.18 00:32:09.251 clat percentiles (usec): 00:32:09.251 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 151], 00:32:09.251 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:32:09.251 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 241], 95.00th=[ 243], 00:32:09.251 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 269], 99.95th=[ 269], 00:32:09.251 | 99.99th=[ 269] 00:32:09.251 bw ( KiB/s): min= 4096, max= 4096, per=34.30%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.251 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.251 lat (usec) : 250=93.07%, 500=2.81% 00:32:09.251 lat (msec) : 50=4.12% 00:32:09.251 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, minf=1 00:32:09.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.251 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.251 job2: (groupid=0, jobs=1): err= 0: pid=1962416: Fri Dec 6 11:33:41 2024 00:32:09.251 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:32:09.251 slat (nsec): min=9989, max=25067, avg=23266.14, stdev=3097.35 00:32:09.251 clat (usec): min=40824, max=43151, avg=41061.25, stdev=469.68 00:32:09.252 lat (usec): min=40849, max=43175, avg=41084.51, stdev=469.69 00:32:09.252 clat percentiles (usec): 00:32:09.252 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:09.252 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:09.252 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:09.252 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:32:09.252 | 99.99th=[43254] 00:32:09.252 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 
0 zone resets 00:32:09.252 slat (nsec): min=10825, max=76864, avg=12373.54, stdev=3562.95 00:32:09.252 clat (usec): min=146, max=342, avg=173.41, stdev=24.71 00:32:09.252 lat (usec): min=157, max=355, avg=185.78, stdev=25.78 00:32:09.252 clat percentiles (usec): 00:32:09.252 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:32:09.252 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:32:09.252 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 239], 00:32:09.252 | 99.00th=[ 249], 99.50th=[ 277], 99.90th=[ 343], 99.95th=[ 343], 00:32:09.252 | 99.99th=[ 343] 00:32:09.252 bw ( KiB/s): min= 4096, max= 4096, per=34.30%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.252 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.252 lat (usec) : 250=94.94%, 500=0.94% 00:32:09.252 lat (msec) : 50=4.12% 00:32:09.252 cpu : usr=0.20%, sys=1.20%, ctx=537, majf=0, minf=1 00:32:09.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.252 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.252 job3: (groupid=0, jobs=1): err= 0: pid=1962417: Fri Dec 6 11:33:41 2024 00:32:09.252 read: IOPS=1415, BW=5663KiB/s (5799kB/s)(5720KiB/1010msec) 00:32:09.252 slat (nsec): min=6544, max=42476, avg=7660.48, stdev=1923.30 00:32:09.252 clat (usec): min=176, max=41190, avg=528.44, stdev=3562.95 00:32:09.252 lat (usec): min=190, max=41197, avg=536.11, stdev=3564.03 00:32:09.252 clat percentiles (usec): 00:32:09.252 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:32:09.252 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 208], 00:32:09.252 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:32:09.252 | 
99.00th=[ 281], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:09.252 | 99.99th=[41157] 00:32:09.252 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:32:09.252 slat (nsec): min=9196, max=37699, avg=10181.54, stdev=1323.72 00:32:09.252 clat (usec): min=117, max=353, avg=143.85, stdev=20.42 00:32:09.252 lat (usec): min=127, max=391, avg=154.03, stdev=20.68 00:32:09.252 clat percentiles (usec): 00:32:09.252 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 129], 00:32:09.252 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:32:09.252 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:32:09.252 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 235], 99.95th=[ 355], 00:32:09.252 | 99.99th=[ 355] 00:32:09.252 bw ( KiB/s): min= 4096, max= 8192, per=51.45%, avg=6144.00, stdev=2896.31, samples=2 00:32:09.252 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:32:09.252 lat (usec) : 250=94.64%, 500=4.99% 00:32:09.252 lat (msec) : 50=0.37% 00:32:09.252 cpu : usr=1.68%, sys=2.48%, ctx=2966, majf=0, minf=1 00:32:09.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.252 issued rwts: total=1430,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.252 00:32:09.252 Run status group 0 (all jobs): 00:32:09.252 READ: bw=5819KiB/s (5959kB/s), 87.7KiB/s-5663KiB/s (89.8kB/s-5799kB/s), io=5988KiB (6132kB), run=1001-1029msec 00:32:09.252 WRITE: bw=11.7MiB/s (12.2MB/s), 1990KiB/s-6083KiB/s (2038kB/s-6229kB/s), io=12.0MiB (12.6MB), run=1001-1029msec 00:32:09.252 00:32:09.252 Disk stats (read/write): 00:32:09.252 nvme0n1: ios=47/512, merge=0/0, ticks=801/80, in_queue=881, util=90.68% 00:32:09.252 nvme0n2: ios=66/512, merge=0/0, 
ticks=1657/96, in_queue=1753, util=95.53% 00:32:09.252 nvme0n3: ios=49/512, merge=0/0, ticks=1727/81, in_queue=1808, util=98.44% 00:32:09.252 nvme0n4: ios=1054/1315, merge=0/0, ticks=663/190, in_queue=853, util=90.24% 00:32:09.252 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:09.252 [global] 00:32:09.252 thread=1 00:32:09.252 invalidate=1 00:32:09.252 rw=write 00:32:09.252 time_based=1 00:32:09.252 runtime=1 00:32:09.252 ioengine=libaio 00:32:09.252 direct=1 00:32:09.252 bs=4096 00:32:09.252 iodepth=128 00:32:09.252 norandommap=0 00:32:09.252 numjobs=1 00:32:09.252 00:32:09.252 verify_dump=1 00:32:09.252 verify_backlog=512 00:32:09.252 verify_state_save=0 00:32:09.252 do_verify=1 00:32:09.252 verify=crc32c-intel 00:32:09.252 [job0] 00:32:09.252 filename=/dev/nvme0n1 00:32:09.252 [job1] 00:32:09.252 filename=/dev/nvme0n2 00:32:09.252 [job2] 00:32:09.252 filename=/dev/nvme0n3 00:32:09.252 [job3] 00:32:09.252 filename=/dev/nvme0n4 00:32:09.252 Could not set queue depth (nvme0n1) 00:32:09.252 Could not set queue depth (nvme0n2) 00:32:09.252 Could not set queue depth (nvme0n3) 00:32:09.252 Could not set queue depth (nvme0n4) 00:32:09.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.252 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.252 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.252 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.252 fio-3.35 00:32:09.252 Starting 4 threads 00:32:10.623 00:32:10.623 job0: (groupid=0, jobs=1): err= 0: pid=1962841: Fri Dec 6 11:33:43 2024 00:32:10.623 read: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1009msec) 
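fio's per-job throughput lines, like the job0 `read: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1009msec)` record above, can be cross-checked from the raw counters in the same report (`issued rwts: total=3442` reads, `bs=4096`, 1009 msec runtime). A minimal sketch of that arithmetic, using job0's numbers from this run:

```python
# Sanity-check job0's reported read throughput against its raw counters
# from the log: 3442 issued reads x 4096 B over 1009 msec.
ios, bs, runtime_ms = 3442, 4096, 1009

iops = ios / (runtime_ms / 1000)                  # matches IOPS=3411
bw_mib = ios * bs / (runtime_ms / 1000) / 2**20   # matches BW=13.3MiB/s
bw_mb = ios * bs / (runtime_ms / 1000) / 10**6    # matches (14.0MB/s)

print(round(iops), round(bw_mib, 1), round(bw_mb, 1))
```

The MiB/s vs MB/s pair fio prints is the same byte rate divided by 2^20 and 10^6 respectively, which is why both figures appear on every bandwidth line.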
00:32:10.623 slat (nsec): min=1262, max=15036k, avg=154466.84, stdev=937476.02 00:32:10.623 clat (msec): min=3, max=100, avg=14.86, stdev=13.05 00:32:10.623 lat (msec): min=3, max=100, avg=15.02, stdev=13.18 00:32:10.623 clat percentiles (msec): 00:32:10.623 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:32:10.623 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:32:10.623 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 25], 95.00th=[ 44], 00:32:10.623 | 99.00th=[ 80], 99.50th=[ 90], 99.90th=[ 101], 99.95th=[ 101], 00:32:10.623 | 99.99th=[ 101] 00:32:10.623 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:32:10.624 slat (nsec): min=1878, max=14001k, avg=126004.07, stdev=690794.54 00:32:10.624 clat (usec): min=1772, max=100486, avg=21428.72, stdev=15927.59 00:32:10.624 lat (usec): min=1787, max=100491, avg=21554.72, stdev=15985.90 00:32:10.624 clat percentiles (msec): 00:32:10.624 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10], 00:32:10.624 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:32:10.624 | 70.00th=[ 21], 80.00th=[ 28], 90.00th=[ 46], 95.00th=[ 55], 00:32:10.624 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 87], 99.95th=[ 101], 00:32:10.624 | 99.99th=[ 101] 00:32:10.624 bw ( KiB/s): min=12752, max=15920, per=20.05%, avg=14336.00, stdev=2240.11, samples=2 00:32:10.624 iops : min= 3188, max= 3980, avg=3584.00, stdev=560.03, samples=2 00:32:10.624 lat (msec) : 2=0.04%, 4=1.08%, 10=27.64%, 20=48.85%, 50=17.46% 00:32:10.624 lat (msec) : 100=4.82%, 250=0.10% 00:32:10.624 cpu : usr=2.38%, sys=4.76%, ctx=396, majf=0, minf=1 00:32:10.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:10.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.624 issued rwts: total=3442,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.624 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:32:10.624 job1: (groupid=0, jobs=1): err= 0: pid=1962842: Fri Dec 6 11:33:43 2024 00:32:10.624 read: IOPS=6530, BW=25.5MiB/s (26.7MB/s)(25.7MiB/1009msec) 00:32:10.624 slat (nsec): min=1251, max=18591k, avg=78319.09, stdev=666678.90 00:32:10.624 clat (usec): min=2933, max=36603, avg=10681.61, stdev=4084.03 00:32:10.624 lat (usec): min=2938, max=41657, avg=10759.93, stdev=4133.39 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7701], 00:32:10.624 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10421], 00:32:10.624 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15401], 95.00th=[17957], 00:32:10.624 | 99.00th=[29754], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:32:10.624 | 99.99th=[36439] 00:32:10.624 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec); 0 zone resets 00:32:10.624 slat (usec): min=2, max=11503, avg=65.79, stdev=535.48 00:32:10.624 clat (usec): min=644, max=24760, avg=8620.95, stdev=2493.06 00:32:10.624 lat (usec): min=657, max=24776, avg=8686.74, stdev=2528.12 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 3785], 5.00th=[ 4817], 10.00th=[ 5735], 20.00th=[ 6718], 00:32:10.624 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8848], 00:32:10.624 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12649], 95.00th=[13304], 00:32:10.624 | 99.00th=[14615], 99.50th=[15139], 99.90th=[16909], 99.95th=[17433], 00:32:10.624 | 99.99th=[24773] 00:32:10.624 bw ( KiB/s): min=24576, max=28672, per=37.23%, avg=26624.00, stdev=2896.31, samples=2 00:32:10.624 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:32:10.624 lat (usec) : 750=0.07% 00:32:10.624 lat (msec) : 2=0.07%, 4=0.69%, 10=63.30%, 20=34.13%, 50=1.74% 00:32:10.624 cpu : usr=5.46%, sys=8.43%, ctx=344, majf=0, minf=1 00:32:10.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:32:10.624 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.624 issued rwts: total=6589,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.624 job2: (groupid=0, jobs=1): err= 0: pid=1962844: Fri Dec 6 11:33:43 2024 00:32:10.624 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:32:10.624 slat (nsec): min=1233, max=11420k, avg=106768.60, stdev=715540.96 00:32:10.624 clat (usec): min=3256, max=29497, avg=12548.72, stdev=4772.37 00:32:10.624 lat (usec): min=3265, max=29499, avg=12655.49, stdev=4815.63 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 3851], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9110], 00:32:10.624 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[11469], 00:32:10.624 | 70.00th=[14353], 80.00th=[16319], 90.00th=[20579], 95.00th=[22676], 00:32:10.624 | 99.00th=[25035], 99.50th=[27132], 99.90th=[29492], 99.95th=[29492], 00:32:10.624 | 99.99th=[29492] 00:32:10.624 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1014msec); 0 zone resets 00:32:10.624 slat (nsec): min=1947, max=12606k, avg=201782.78, stdev=970556.75 00:32:10.624 clat (usec): min=1415, max=88484, avg=27467.33, stdev=19197.73 00:32:10.624 lat (usec): min=1427, max=88495, avg=27669.11, stdev=19289.42 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 3195], 5.00th=[ 7177], 10.00th=[12649], 20.00th=[15533], 00:32:10.624 | 30.00th=[15795], 40.00th=[16581], 50.00th=[18744], 60.00th=[22938], 00:32:10.624 | 70.00th=[30278], 80.00th=[40109], 90.00th=[58459], 95.00th=[73925], 00:32:10.624 | 99.00th=[84411], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:32:10.624 | 99.99th=[88605] 00:32:10.624 bw ( KiB/s): min=12288, max=12912, per=17.62%, avg=12600.00, stdev=441.23, samples=2 00:32:10.624 iops : min= 3072, max= 3228, avg=3150.00, stdev=110.31, samples=2 00:32:10.624 lat (msec) : 2=0.05%, 
4=1.56%, 10=22.65%, 20=46.35%, 50=22.49% 00:32:10.624 lat (msec) : 100=6.91% 00:32:10.624 cpu : usr=2.17%, sys=3.55%, ctx=398, majf=0, minf=1 00:32:10.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:10.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.624 issued rwts: total=3072,3278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.624 job3: (groupid=0, jobs=1): err= 0: pid=1962850: Fri Dec 6 11:33:43 2024 00:32:10.624 read: IOPS=4097, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1009msec) 00:32:10.624 slat (nsec): min=982, max=15728k, avg=104294.40, stdev=870554.20 00:32:10.624 clat (usec): min=1318, max=51078, avg=14898.12, stdev=8379.13 00:32:10.624 lat (usec): min=1341, max=53195, avg=15002.41, stdev=8445.42 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 4686], 5.00th=[ 7046], 10.00th=[ 8291], 20.00th=[ 9372], 00:32:10.624 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[13435], 00:32:10.624 | 70.00th=[16188], 80.00th=[21365], 90.00th=[27395], 95.00th=[35390], 00:32:10.624 | 99.00th=[38536], 99.50th=[41157], 99.90th=[51119], 99.95th=[51119], 00:32:10.624 | 99.99th=[51119] 00:32:10.624 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:32:10.624 slat (nsec): min=1686, max=17892k, avg=98869.32, stdev=886297.11 00:32:10.624 clat (usec): min=3966, max=47739, avg=14389.93, stdev=7346.84 00:32:10.624 lat (usec): min=3969, max=47766, avg=14488.80, stdev=7433.90 00:32:10.624 clat percentiles (usec): 00:32:10.624 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 7373], 20.00th=[ 8717], 00:32:10.624 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10552], 60.00th=[13829], 00:32:10.624 | 70.00th=[19268], 80.00th=[20579], 90.00th=[25035], 95.00th=[29754], 00:32:10.624 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40633], 
99.95th=[40633], 00:32:10.624 | 99.99th=[47973] 00:32:10.624 bw ( KiB/s): min=16544, max=19600, per=25.27%, avg=18072.00, stdev=2160.92, samples=2 00:32:10.624 iops : min= 4136, max= 4900, avg=4518.00, stdev=540.23, samples=2 00:32:10.624 lat (msec) : 2=0.25%, 4=0.08%, 10=32.93%, 20=42.47%, 50=24.14% 00:32:10.624 lat (msec) : 100=0.13% 00:32:10.624 cpu : usr=2.38%, sys=4.56%, ctx=261, majf=0, minf=1 00:32:10.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:10.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.624 issued rwts: total=4134,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.624 00:32:10.624 Run status group 0 (all jobs): 00:32:10.624 READ: bw=66.4MiB/s (69.6MB/s), 11.8MiB/s-25.5MiB/s (12.4MB/s-26.7MB/s), io=67.3MiB (70.6MB), run=1009-1014msec 00:32:10.624 WRITE: bw=69.8MiB/s (73.2MB/s), 12.6MiB/s-25.8MiB/s (13.2MB/s-27.0MB/s), io=70.8MiB (74.2MB), run=1009-1014msec 00:32:10.624 00:32:10.624 Disk stats (read/write): 00:32:10.624 nvme0n1: ios=2591/3047, merge=0/0, ticks=33224/61884, in_queue=95108, util=83.37% 00:32:10.624 nvme0n2: ios=5171/5632, merge=0/0, ticks=51693/46211, in_queue=97904, util=97.64% 00:32:10.624 nvme0n3: ios=2560/2583, merge=0/0, ticks=31792/66535, in_queue=98327, util=87.55% 00:32:10.624 nvme0n4: ios=3072/3482, merge=0/0, ticks=33000/32909, in_queue=65909, util=89.20% 00:32:10.624 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:10.624 [global] 00:32:10.624 thread=1 00:32:10.624 invalidate=1 00:32:10.624 rw=randwrite 00:32:10.624 time_based=1 00:32:10.624 runtime=1 00:32:10.624 ioengine=libaio 00:32:10.624 direct=1 00:32:10.624 bs=4096 00:32:10.624 
iodepth=128 00:32:10.624 norandommap=0 00:32:10.624 numjobs=1 00:32:10.624 00:32:10.624 verify_dump=1 00:32:10.624 verify_backlog=512 00:32:10.624 verify_state_save=0 00:32:10.624 do_verify=1 00:32:10.624 verify=crc32c-intel 00:32:10.624 [job0] 00:32:10.624 filename=/dev/nvme0n1 00:32:10.624 [job1] 00:32:10.624 filename=/dev/nvme0n2 00:32:10.624 [job2] 00:32:10.624 filename=/dev/nvme0n3 00:32:10.624 [job3] 00:32:10.624 filename=/dev/nvme0n4 00:32:10.624 Could not set queue depth (nvme0n1) 00:32:10.625 Could not set queue depth (nvme0n2) 00:32:10.625 Could not set queue depth (nvme0n3) 00:32:10.625 Could not set queue depth (nvme0n4) 00:32:10.882 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.882 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.882 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.882 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.882 fio-3.35 00:32:10.882 Starting 4 threads 00:32:12.259 00:32:12.259 job0: (groupid=0, jobs=1): err= 0: pid=1963264: Fri Dec 6 11:33:45 2024 00:32:12.259 read: IOPS=5280, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1009msec) 00:32:12.259 slat (nsec): min=1020, max=10037k, avg=91216.62, stdev=552817.19 00:32:12.259 clat (usec): min=1205, max=29174, avg=11714.61, stdev=3480.01 00:32:12.259 lat (usec): min=5701, max=29190, avg=11805.82, stdev=3502.44 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 6325], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9503], 00:32:12.259 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:32:12.259 | 70.00th=[12125], 80.00th=[13173], 90.00th=[16319], 95.00th=[19006], 00:32:12.259 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:32:12.259 | 99.99th=[29230] 00:32:12.259 
write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:32:12.259 slat (nsec): min=1953, max=21323k, avg=87751.00, stdev=611249.59 00:32:12.259 clat (usec): min=3851, max=53117, avg=11648.68, stdev=4713.87 00:32:12.259 lat (usec): min=3862, max=53133, avg=11736.43, stdev=4752.78 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 5145], 5.00th=[ 8029], 10.00th=[ 9372], 20.00th=[ 9634], 00:32:12.259 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[10945], 00:32:12.259 | 70.00th=[11076], 80.00th=[12125], 90.00th=[14353], 95.00th=[18482], 00:32:12.259 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:32:12.259 | 99.99th=[53216] 00:32:12.259 bw ( KiB/s): min=20480, max=24576, per=26.12%, avg=22528.00, stdev=2896.31, samples=2 00:32:12.259 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:32:12.259 lat (msec) : 2=0.01%, 4=0.25%, 10=28.28%, 20=67.79%, 50=3.66% 00:32:12.259 lat (msec) : 100=0.01% 00:32:12.259 cpu : usr=4.07%, sys=4.76%, ctx=527, majf=0, minf=1 00:32:12.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:12.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.259 issued rwts: total=5328,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.259 job1: (groupid=0, jobs=1): err= 0: pid=1963265: Fri Dec 6 11:33:45 2024 00:32:12.259 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:32:12.259 slat (nsec): min=1223, max=21445k, avg=95195.07, stdev=792744.46 00:32:12.259 clat (usec): min=3053, max=38681, avg=12917.29, stdev=4741.75 00:32:12.259 lat (usec): min=3062, max=38701, avg=13012.49, stdev=4800.43 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:32:12.259 | 30.00th=[10028], 
40.00th=[10552], 50.00th=[10945], 60.00th=[12387], 00:32:12.259 | 70.00th=[14091], 80.00th=[16057], 90.00th=[20055], 95.00th=[22676], 00:32:12.259 | 99.00th=[28443], 99.50th=[28443], 99.90th=[30802], 99.95th=[33162], 00:32:12.259 | 99.99th=[38536] 00:32:12.259 write: IOPS=5353, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec); 0 zone resets 00:32:12.259 slat (usec): min=2, max=16726, avg=89.26, stdev=682.48 00:32:12.259 clat (usec): min=212, max=33483, avg=11262.64, stdev=3975.34 00:32:12.259 lat (usec): min=459, max=33517, avg=11351.90, stdev=4026.71 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 3392], 5.00th=[ 6259], 10.00th=[ 7963], 20.00th=[ 9241], 00:32:12.259 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:32:12.259 | 70.00th=[10945], 80.00th=[12911], 90.00th=[16450], 95.00th=[19792], 00:32:12.259 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:32:12.259 | 99.99th=[33424] 00:32:12.259 bw ( KiB/s): min=18736, max=23240, per=24.33%, avg=20988.00, stdev=3184.81, samples=2 00:32:12.259 iops : min= 4684, max= 5810, avg=5247.00, stdev=796.20, samples=2 00:32:12.259 lat (usec) : 250=0.01%, 500=0.03%, 750=0.01% 00:32:12.259 lat (msec) : 2=0.21%, 4=0.49%, 10=29.47%, 20=62.94%, 50=6.84% 00:32:12.259 cpu : usr=3.79%, sys=6.88%, ctx=390, majf=0, minf=1 00:32:12.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:12.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.259 issued rwts: total=5120,5375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.259 job2: (groupid=0, jobs=1): err= 0: pid=1963266: Fri Dec 6 11:33:45 2024 00:32:12.259 read: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec) 00:32:12.259 slat (nsec): min=1053, max=14361k, avg=106376.92, stdev=786809.43 00:32:12.259 clat (usec): 
min=2558, max=47293, avg=14148.28, stdev=5277.44 00:32:12.259 lat (usec): min=4472, max=47296, avg=14254.66, stdev=5326.31 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 9110], 20.00th=[10814], 00:32:12.259 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[13698], 00:32:12.259 | 70.00th=[15401], 80.00th=[17433], 90.00th=[20317], 95.00th=[26084], 00:32:12.259 | 99.00th=[31589], 99.50th=[34341], 99.90th=[40633], 99.95th=[40633], 00:32:12.259 | 99.99th=[47449] 00:32:12.259 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:32:12.259 slat (nsec): min=1712, max=14659k, avg=78608.35, stdev=541660.40 00:32:12.259 clat (usec): min=359, max=40569, avg=11926.70, stdev=4411.91 00:32:12.259 lat (usec): min=371, max=40575, avg=12005.31, stdev=4437.13 00:32:12.259 clat percentiles (usec): 00:32:12.259 | 1.00th=[ 2376], 5.00th=[ 5080], 10.00th=[ 7570], 20.00th=[ 9503], 00:32:12.259 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[12387], 00:32:12.259 | 70.00th=[12518], 80.00th=[13435], 90.00th=[16319], 95.00th=[19530], 00:32:12.259 | 99.00th=[27657], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:32:12.259 | 99.99th=[40633] 00:32:12.259 bw ( KiB/s): min=19968, max=20984, per=23.74%, avg=20476.00, stdev=718.42, samples=2 00:32:12.259 iops : min= 4992, max= 5246, avg=5119.00, stdev=179.61, samples=2 00:32:12.259 lat (usec) : 500=0.01%, 750=0.02% 00:32:12.259 lat (msec) : 2=0.46%, 4=1.57%, 10=16.42%, 20=72.95%, 50=8.57% 00:32:12.259 cpu : usr=3.37%, sys=4.96%, ctx=447, majf=0, minf=2 00:32:12.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:12.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.259 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.259 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:32:12.259 job3: (groupid=0, jobs=1): err= 0: pid=1963267: Fri Dec 6 11:33:45 2024 00:32:12.259 read: IOPS=5419, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1007msec) 00:32:12.259 slat (nsec): min=1339, max=11466k, avg=94122.25, stdev=789031.59 00:32:12.259 clat (usec): min=4242, max=23090, avg=11932.49, stdev=3017.59 00:32:12.260 lat (usec): min=4248, max=29263, avg=12026.61, stdev=3101.27 00:32:12.260 clat percentiles (usec): 00:32:12.260 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:32:12.260 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11600], 00:32:12.260 | 70.00th=[12256], 80.00th=[13304], 90.00th=[16909], 95.00th=[18744], 00:32:12.260 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21890], 99.95th=[22676], 00:32:12.260 | 99.99th=[23200] 00:32:12.260 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:32:12.260 slat (nsec): min=1947, max=15160k, avg=80998.90, stdev=654136.59 00:32:12.260 clat (usec): min=1574, max=26839, avg=11113.78, stdev=3241.81 00:32:12.260 lat (usec): min=1587, max=26849, avg=11194.77, stdev=3277.97 00:32:12.260 clat percentiles (usec): 00:32:12.260 | 1.00th=[ 4080], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 9110], 00:32:12.260 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:32:12.260 | 70.00th=[11994], 80.00th=[12256], 90.00th=[14746], 95.00th=[15926], 00:32:12.260 | 99.00th=[25822], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:32:12.260 | 99.99th=[26870] 00:32:12.260 bw ( KiB/s): min=20480, max=24576, per=26.12%, avg=22528.00, stdev=2896.31, samples=2 00:32:12.260 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:32:12.260 lat (msec) : 2=0.03%, 4=0.42%, 10=29.09%, 20=68.02%, 50=2.43% 00:32:12.260 cpu : usr=3.48%, sys=6.96%, ctx=355, majf=0, minf=1 00:32:12.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:12.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:12.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.260 issued rwts: total=5457,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.260 00:32:12.260 Run status group 0 (all jobs): 00:32:12.260 READ: bw=79.9MiB/s (83.8MB/s), 18.3MiB/s-21.2MiB/s (19.2MB/s-22.2MB/s), io=80.6MiB (84.5MB), run=1004-1009msec 00:32:12.260 WRITE: bw=84.2MiB/s (88.3MB/s), 19.8MiB/s-21.8MiB/s (20.8MB/s-22.9MB/s), io=85.0MiB (89.1MB), run=1004-1009msec 00:32:12.260 00:32:12.260 Disk stats (read/write): 00:32:12.260 nvme0n1: ios=4632/4731, merge=0/0, ticks=22916/22746, in_queue=45662, util=93.99% 00:32:12.260 nvme0n2: ios=4146/4519, merge=0/0, ticks=49551/40887, in_queue=90438, util=98.17% 00:32:12.260 nvme0n3: ios=4091/4096, merge=0/0, ticks=37929/29774, in_queue=67703, util=100.00% 00:32:12.260 nvme0n4: ios=4631/4851, merge=0/0, ticks=54159/50545, in_queue=104704, util=99.47% 00:32:12.260 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:12.260 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1963374 00:32:12.260 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:12.260 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:12.260 [global] 00:32:12.260 thread=1 00:32:12.260 invalidate=1 00:32:12.260 rw=read 00:32:12.260 time_based=1 00:32:12.260 runtime=10 00:32:12.260 ioengine=libaio 00:32:12.260 direct=1 00:32:12.260 bs=4096 00:32:12.260 iodepth=1 00:32:12.260 norandommap=1 00:32:12.260 numjobs=1 00:32:12.260 00:32:12.260 [job0] 00:32:12.260 filename=/dev/nvme0n1 00:32:12.260 [job1] 00:32:12.260 filename=/dev/nvme0n2 00:32:12.260 [job2] 00:32:12.260 filename=/dev/nvme0n3 
00:32:12.260 [job3] 00:32:12.260 filename=/dev/nvme0n4 00:32:12.260 Could not set queue depth (nvme0n1) 00:32:12.260 Could not set queue depth (nvme0n2) 00:32:12.260 Could not set queue depth (nvme0n3) 00:32:12.260 Could not set queue depth (nvme0n4) 00:32:12.517 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.517 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.517 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.517 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.517 fio-3.35 00:32:12.517 Starting 4 threads 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:15.797 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42262528, buflen=4096 00:32:15.797 fio: pid=1963691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:15.797 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=565248, buflen=4096 00:32:15.797 fio: pid=1963690, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.797 fio: io_u error on file 
/dev/nvme0n1: Operation not supported: read offset=48742400, buflen=4096 00:32:15.797 fio: pid=1963688, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.797 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:16.057 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.057 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:16.057 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=327680, buflen=4096 00:32:16.057 fio: pid=1963689, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:16.057 00:32:16.057 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1963688: Fri Dec 6 11:33:48 2024 00:32:16.057 read: IOPS=3857, BW=15.1MiB/s (15.8MB/s)(46.5MiB/3085msec) 00:32:16.057 slat (usec): min=3, max=22833, avg=10.64, stdev=223.83 00:32:16.057 clat (usec): min=176, max=41124, avg=245.03, stdev=916.16 00:32:16.057 lat (usec): min=183, max=41143, avg=255.67, stdev=943.51 00:32:16.057 clat percentiles (usec): 00:32:16.057 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 217], 00:32:16.057 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 227], 00:32:16.057 | 70.00th=[ 231], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 243], 00:32:16.057 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 494], 99.95th=[41157], 00:32:16.057 | 99.99th=[41157] 00:32:16.057 bw ( KiB/s): min= 9256, 
max=16928, per=56.54%, avg=15315.20, stdev=3387.79, samples=5 00:32:16.057 iops : min= 2314, max= 4232, avg=3828.80, stdev=846.95, samples=5 00:32:16.057 lat (usec) : 250=97.78%, 500=2.12%, 750=0.02% 00:32:16.057 lat (msec) : 2=0.03%, 50=0.05% 00:32:16.057 cpu : usr=1.91%, sys=6.10%, ctx=11904, majf=0, minf=1 00:32:16.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 issued rwts: total=11901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.057 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1963689: Fri Dec 6 11:33:48 2024 00:32:16.057 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(320KiB/3313msec) 00:32:16.057 slat (usec): min=11, max=10729, avg=157.86, stdev=1191.57 00:32:16.057 clat (usec): min=40783, max=41289, avg=40983.99, stdev=66.75 00:32:16.057 lat (usec): min=40817, max=51957, avg=41143.54, stdev=1230.07 00:32:16.057 clat percentiles (usec): 00:32:16.057 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:16.057 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:16.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.057 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:16.057 | 99.99th=[41157] 00:32:16.057 bw ( KiB/s): min= 93, max= 104, per=0.35%, avg=96.83, stdev= 3.71, samples=6 00:32:16.057 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:32:16.057 lat (msec) : 50=98.77% 00:32:16.057 cpu : usr=0.09%, sys=0.00%, ctx=83, majf=0, minf=2 00:32:16.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:16.057 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.057 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1963690: Fri Dec 6 11:33:48 2024 00:32:16.057 read: IOPS=47, BW=189KiB/s (194kB/s)(552KiB/2921msec) 00:32:16.057 slat (nsec): min=7073, max=32170, avg=16432.42, stdev=7528.73 00:32:16.057 clat (usec): min=219, max=42409, avg=20991.88, stdev=20523.88 00:32:16.057 lat (usec): min=226, max=42417, avg=21008.25, stdev=20521.27 00:32:16.057 clat percentiles (usec): 00:32:16.057 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 241], 00:32:16.057 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[40633], 60.00th=[40633], 00:32:16.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:32:16.057 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:16.057 | 99.99th=[42206] 00:32:16.057 bw ( KiB/s): min= 144, max= 240, per=0.74%, avg=200.00, stdev=35.78, samples=5 00:32:16.057 iops : min= 36, max= 60, avg=50.00, stdev= 8.94, samples=5 00:32:16.057 lat (usec) : 250=37.41%, 500=11.51% 00:32:16.057 lat (msec) : 50=50.36% 00:32:16.057 cpu : usr=0.14%, sys=0.00%, ctx=141, majf=0, minf=2 00:32:16.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.057 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1963691: Fri Dec 6 11:33:48 2024 00:32:16.057 read: IOPS=3843, BW=15.0MiB/s 
(15.7MB/s)(40.3MiB/2685msec) 00:32:16.057 slat (nsec): min=7081, max=39878, avg=8393.17, stdev=1231.96 00:32:16.057 clat (usec): min=205, max=1463, avg=247.86, stdev=14.35 00:32:16.057 lat (usec): min=214, max=1470, avg=256.25, stdev=14.27 00:32:16.057 clat percentiles (usec): 00:32:16.057 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:32:16.057 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:32:16.057 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 258], 00:32:16.057 | 99.00th=[ 265], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 351], 00:32:16.057 | 99.99th=[ 506] 00:32:16.057 bw ( KiB/s): min=15480, max=15552, per=57.29%, avg=15518.40, stdev=26.17, samples=5 00:32:16.057 iops : min= 3870, max= 3888, avg=3879.60, stdev= 6.54, samples=5 00:32:16.057 lat (usec) : 250=67.55%, 500=32.41%, 750=0.02% 00:32:16.057 lat (msec) : 2=0.01% 00:32:16.057 cpu : usr=2.20%, sys=6.22%, ctx=10320, majf=0, minf=2 00:32:16.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.057 issued rwts: total=10319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.057 00:32:16.057 Run status group 0 (all jobs): 00:32:16.057 READ: bw=26.5MiB/s (27.7MB/s), 96.6KiB/s-15.1MiB/s (98.9kB/s-15.8MB/s), io=87.6MiB (91.9MB), run=2685-3313msec 00:32:16.057 00:32:16.057 Disk stats (read/write): 00:32:16.057 nvme0n1: ios=11052/0, merge=0/0, ticks=2568/0, in_queue=2568, util=95.36% 00:32:16.057 nvme0n2: ios=75/0, merge=0/0, ticks=3075/0, in_queue=3075, util=95.79% 00:32:16.057 nvme0n3: ios=181/0, merge=0/0, ticks=3792/0, in_queue=3792, util=98.99% 00:32:16.057 nvme0n4: ios=10090/0, merge=0/0, ticks=2365/0, in_queue=2365, util=96.48% 00:32:16.316 11:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.316 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:16.316 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.317 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:16.575 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.575 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:16.834 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.834 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1963374 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:17.094 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:17.094 nvmf hotplug test: fio failed as expected 00:32:17.094 11:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.353 rmmod nvme_tcp 00:32:17.353 rmmod nvme_fabrics 00:32:17.353 rmmod nvme_keyring 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1960528 ']' 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1960528 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1960528 ']' 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1960528 00:32:17.353 11:33:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1960528 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1960528' 00:32:17.353 killing process with pid 1960528 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1960528 00:32:17.353 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1960528 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.613 
11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.613 11:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.152 00:32:20.152 real 0m26.248s 00:32:20.152 user 1m44.071s 00:32:20.152 sys 0m11.351s 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.152 ************************************ 00:32:20.152 END TEST nvmf_fio_target 00:32:20.152 ************************************ 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:20.152 ************************************ 00:32:20.152 START TEST nvmf_bdevio 00:32:20.152 
************************************ 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:20.152 * Looking for test storage... 00:32:20.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.152 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:20.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.153 --rc genhtml_branch_coverage=1 00:32:20.153 --rc genhtml_function_coverage=1 00:32:20.153 --rc genhtml_legend=1 00:32:20.153 --rc geninfo_all_blocks=1 00:32:20.153 --rc geninfo_unexecuted_blocks=1 00:32:20.153 00:32:20.153 ' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:20.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.153 --rc genhtml_branch_coverage=1 00:32:20.153 --rc genhtml_function_coverage=1 00:32:20.153 --rc genhtml_legend=1 00:32:20.153 --rc geninfo_all_blocks=1 00:32:20.153 --rc geninfo_unexecuted_blocks=1 00:32:20.153 00:32:20.153 ' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:20.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.153 --rc genhtml_branch_coverage=1 00:32:20.153 --rc genhtml_function_coverage=1 00:32:20.153 --rc genhtml_legend=1 00:32:20.153 --rc geninfo_all_blocks=1 00:32:20.153 --rc geninfo_unexecuted_blocks=1 00:32:20.153 00:32:20.153 ' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:20.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:20.153 --rc genhtml_branch_coverage=1 00:32:20.153 --rc genhtml_function_coverage=1 00:32:20.153 --rc genhtml_legend=1 00:32:20.153 --rc geninfo_all_blocks=1 00:32:20.153 --rc geninfo_unexecuted_blocks=1 00:32:20.153 00:32:20.153 ' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:20.153 11:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.153 11:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.153 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.720 11:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.720 11:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:26.720 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:26.720 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:26.720 Found net devices under 0000:af:00.0: cvl_0_0 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:26.720 Found net devices under 0000:af:00.1: cvl_0_1 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.720 
11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.720 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:32:26.721 00:32:26.721 --- 10.0.0.2 ping statistics --- 00:32:26.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.721 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:32:26.721 00:32:26.721 --- 10.0.0.1 ping statistics --- 00:32:26.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.721 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1967976 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1967976 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1967976 ']' 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.721 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.721 [2024-12-06 11:33:58.819185] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:26.721 [2024-12-06 11:33:58.820046] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:32:26.721 [2024-12-06 11:33:58.820083] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.721 [2024-12-06 11:33:58.899319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:26.721 [2024-12-06 11:33:58.937911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.721 [2024-12-06 11:33:58.937946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.721 [2024-12-06 11:33:58.937953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.721 [2024-12-06 11:33:58.937958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.721 [2024-12-06 11:33:58.937963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.721 [2024-12-06 11:33:58.939373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:26.721 [2024-12-06 11:33:58.939490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:26.721 [2024-12-06 11:33:58.939608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:26.721 [2024-12-06 11:33:58.939609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:26.721 [2024-12-06 11:33:59.005008] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:26.721 [2024-12-06 11:33:59.005322] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:26.721 [2024-12-06 11:33:59.005791] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:26.721 [2024-12-06 11:33:59.005955] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:26.721 [2024-12-06 11:33:59.006017] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:26.721 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.721 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:26.721 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:26.721 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.721 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 [2024-12-06 11:33:59.684442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 Malloc0 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.980 [2024-12-06 11:33:59.768575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:26.980 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:26.981 { 00:32:26.981 "params": { 00:32:26.981 "name": "Nvme$subsystem", 00:32:26.981 "trtype": "$TEST_TRANSPORT", 00:32:26.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.981 "adrfam": "ipv4", 00:32:26.981 "trsvcid": "$NVMF_PORT", 00:32:26.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.981 "hdgst": ${hdgst:-false}, 00:32:26.981 "ddgst": ${ddgst:-false} 00:32:26.981 }, 00:32:26.981 "method": "bdev_nvme_attach_controller" 00:32:26.981 } 00:32:26.981 EOF 00:32:26.981 )") 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:26.981 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:26.981 "params": { 00:32:26.981 "name": "Nvme1", 00:32:26.981 "trtype": "tcp", 00:32:26.981 "traddr": "10.0.0.2", 00:32:26.981 "adrfam": "ipv4", 00:32:26.981 "trsvcid": "4420", 00:32:26.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.981 "hdgst": false, 00:32:26.981 "ddgst": false 00:32:26.981 }, 00:32:26.981 "method": "bdev_nvme_attach_controller" 00:32:26.981 }' 00:32:26.981 [2024-12-06 11:33:59.818301] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:32:26.981 [2024-12-06 11:33:59.818346] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968257 ] 00:32:26.981 [2024-12-06 11:33:59.892242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:27.240 [2024-12-06 11:33:59.934232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.240 [2024-12-06 11:33:59.934343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.240 [2024-12-06 11:33:59.934344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.240 I/O targets: 00:32:27.240 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:27.240 00:32:27.240 00:32:27.240 CUnit - A unit testing framework for C - Version 2.1-3 00:32:27.240 http://cunit.sourceforge.net/ 00:32:27.240 00:32:27.240 00:32:27.240 Suite: bdevio tests on: Nvme1n1 00:32:27.240 Test: blockdev write read block ...passed 00:32:27.240 Test: blockdev write zeroes read block ...passed 00:32:27.240 Test: blockdev write zeroes read no split ...passed 00:32:27.498 Test: blockdev 
write zeroes read split ...passed 00:32:27.498 Test: blockdev write zeroes read split partial ...passed 00:32:27.498 Test: blockdev reset ...[2024-12-06 11:34:00.276625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:27.498 [2024-12-06 11:34:00.276687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100b400 (9): Bad file descriptor 00:32:27.498 [2024-12-06 11:34:00.280133] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:27.498 passed 00:32:27.498 Test: blockdev write read 8 blocks ...passed 00:32:27.498 Test: blockdev write read size > 128k ...passed 00:32:27.498 Test: blockdev write read invalid size ...passed 00:32:27.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:27.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:27.498 Test: blockdev write read max offset ...passed 00:32:27.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:27.757 Test: blockdev writev readv 8 blocks ...passed 00:32:27.757 Test: blockdev writev readv 30 x 1block ...passed 00:32:27.757 Test: blockdev writev readv block ...passed 00:32:27.757 Test: blockdev writev readv size > 128k ...passed 00:32:27.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:27.757 Test: blockdev comparev and writev ...[2024-12-06 11:34:00.532027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 
[2024-12-06 11:34:00.532088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:27.757 [2024-12-06 11:34:00.532963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:27.757 passed 00:32:27.757 Test: blockdev nvme passthru rw ...passed 00:32:27.757 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:34:00.615399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:27.757 [2024-12-06 11:34:00.615417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.615524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:27.757 [2024-12-06 11:34:00.615534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.615633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:27.757 [2024-12-06 11:34:00.615642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:27.757 [2024-12-06 11:34:00.615743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:27.757 [2024-12-06 11:34:00.615753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:27.757 passed 00:32:27.757 Test: blockdev nvme admin passthru ...passed 00:32:27.757 Test: blockdev copy ...passed 00:32:27.757 00:32:27.757 Run Summary: Type Total Ran Passed Failed Inactive 00:32:27.757 suites 1 1 n/a 0 0 00:32:27.757 tests 23 23 23 0 0 00:32:27.757 asserts 152 152 152 0 n/a 00:32:27.757 00:32:27.757 Elapsed time = 1.171 
seconds 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.015 rmmod nvme_tcp 00:32:28.015 rmmod nvme_fabrics 00:32:28.015 rmmod nvme_keyring 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1967976 ']' 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1967976 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1967976 ']' 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1967976 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967976 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967976' 00:32:28.015 killing process with pid 1967976 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1967976 00:32:28.015 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1967976 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.273 11:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.804 00:32:30.804 real 0m10.622s 00:32:30.804 user 0m8.681s 00:32:30.804 sys 0m5.296s 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:30.804 ************************************ 00:32:30.804 END TEST nvmf_bdevio 00:32:30.804 ************************************ 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:30.804 00:32:30.804 real 4m36.663s 00:32:30.804 user 9m22.280s 00:32:30.804 sys 1m53.656s 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.804 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:30.804 ************************************ 00:32:30.804 END TEST nvmf_target_core_interrupt_mode 00:32:30.804 ************************************ 00:32:30.804 11:34:03 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:30.804 11:34:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:30.805 11:34:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.805 11:34:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.805 ************************************ 00:32:30.805 START TEST nvmf_interrupt 00:32:30.805 ************************************ 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:30.805 * Looking for test storage... 
00:32:30.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.805 --rc genhtml_branch_coverage=1 00:32:30.805 --rc genhtml_function_coverage=1 00:32:30.805 --rc genhtml_legend=1 00:32:30.805 --rc geninfo_all_blocks=1 00:32:30.805 --rc geninfo_unexecuted_blocks=1 00:32:30.805 00:32:30.805 ' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.805 --rc genhtml_branch_coverage=1 00:32:30.805 --rc 
genhtml_function_coverage=1 00:32:30.805 --rc genhtml_legend=1 00:32:30.805 --rc geninfo_all_blocks=1 00:32:30.805 --rc geninfo_unexecuted_blocks=1 00:32:30.805 00:32:30.805 ' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.805 --rc genhtml_branch_coverage=1 00:32:30.805 --rc genhtml_function_coverage=1 00:32:30.805 --rc genhtml_legend=1 00:32:30.805 --rc geninfo_all_blocks=1 00:32:30.805 --rc geninfo_unexecuted_blocks=1 00:32:30.805 00:32:30.805 ' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.805 --rc genhtml_branch_coverage=1 00:32:30.805 --rc genhtml_function_coverage=1 00:32:30.805 --rc genhtml_legend=1 00:32:30.805 --rc geninfo_all_blocks=1 00:32:30.805 --rc geninfo_unexecuted_blocks=1 00:32:30.805 00:32:30.805 ' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.805 
11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.805 
11:34:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.805 11:34:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.805 
11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.805 11:34:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:37.369 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.370 11:34:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:37.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:37.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.370 11:34:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:37.370 Found net devices under 0000:af:00.0: cvl_0_0 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:37.370 Found net devices under 0000:af:00.1: cvl_0_1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.370 11:34:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:37.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:32:37.370 00:32:37.370 --- 10.0.0.2 ping statistics --- 00:32:37.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.370 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:37.370 00:32:37.370 --- 10.0.0.1 ping statistics --- 00:32:37.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.370 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:37.370 11:34:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1972007 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1972007 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1972007 ']' 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.370 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.371 11:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.371 [2024-12-06 11:34:09.451056] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:37.371 [2024-12-06 11:34:09.452023] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:32:37.371 [2024-12-06 11:34:09.452069] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.371 [2024-12-06 11:34:09.530307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:37.371 [2024-12-06 11:34:09.569606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.371 [2024-12-06 11:34:09.569640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.371 [2024-12-06 11:34:09.569649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.371 [2024-12-06 11:34:09.569655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.371 [2024-12-06 11:34:09.569659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.371 [2024-12-06 11:34:09.570775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.371 [2024-12-06 11:34:09.570776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.371 [2024-12-06 11:34:09.637891] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:37.371 [2024-12-06 11:34:09.638346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:37.371 [2024-12-06 11:34:09.638596] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
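The target app is launched under `ip netns exec` with `--interrupt-mode`, and the harness then blocks in `waitforlisten` until the process is alive and its RPC socket is up (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A minimal sketch of that poll loop, assuming the default `/var/tmp/spdk.sock` socket path and a 0.5 s retry interval; only `max_retries=100` and the message itself are visible in the trace:

```shell
# Hedged sketch of the waitforlisten pattern from autotest_common.sh:
# poll until the SPDK app is running AND its RPC unix socket exists.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up
        sleep 0.5                                # assumed interval, not shown in the log
    done
    return 1
}
```

The dead-process check matters here: if `nvmf_tgt` crashes during DPDK/EAL init, the loop bails out immediately instead of burning all retries.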
00:32:37.371 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.371 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:37.371 11:34:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.371 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.371 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:37.631 5000+0 records in 00:32:37.631 5000+0 records out 00:32:37.631 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178207 s, 575 MB/s 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 AIO0 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.631 11:34:10 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 [2024-12-06 11:34:10.379594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:37.631 [2024-12-06 11:34:10.419949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1972007 0 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 0 idle 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:37.631 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972007 root 20 0 128.2g 43904 33152 S 0.0 0.0 0:00.25 reactor_0' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972007 root 20 0 128.2g 43904 33152 S 0.0 0.0 0:00.25 reactor_0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:37.891 
11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1972007 1 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 1 idle 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972011 root 20 0 128.2g 43904 33152 S 0.0 0.0 0:00.00 reactor_1' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972011 root 20 0 128.2g 
43904 33152 S 0.0 0.0 0:00.00 reactor_1 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1972302 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1972007 0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1972007 0 busy 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:37.891 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972007 root 20 0 128.2g 44800 33152 R 73.3 0.0 0:00.37 reactor_0' 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972007 root 20 0 128.2g 44800 33152 R 73.3 0.0 0:00.37 reactor_0 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:38.151 11:34:10 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1972007 1 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1972007 1 busy 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:38.151 11:34:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972011 root 20 0 128.2g 44800 33152 R 99.9 0.0 0:00.24 reactor_1' 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972011 root 20 0 128.2g 44800 33152 R 99.9 0.0 0:00.24 reactor_1 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:38.408 11:34:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1972302 00:32:48.374 Initializing NVMe Controllers 00:32:48.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:48.374 Controller IO queue size 256, less than required. 00:32:48.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:48.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:48.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:48.374 Initialization complete. Launching workers. 
00:32:48.374 ======================================================== 00:32:48.374 Latency(us) 00:32:48.374 Device Information : IOPS MiB/s Average min max 00:32:48.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17598.00 68.74 14554.67 2989.86 31121.22 00:32:48.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17769.40 69.41 14411.29 7241.42 27346.01 00:32:48.374 ======================================================== 00:32:48.374 Total : 35367.39 138.15 14482.63 2989.86 31121.22 00:32:48.374 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1972007 0 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 0 idle 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:48.374 11:34:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972007 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:20.24 reactor_0' 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972007 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:20.24 reactor_0 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1972007 1 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 1 idle 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:48.374 11:34:21 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:48.374 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972011 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:10.00 reactor_1' 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972011 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:10.00 reactor_1 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:48.634 11:34:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:49.203 11:34:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
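After `nvme connect`, `waitforserial` polls `lsblk` until a block device carrying the subsystem serial (`SPDKISFASTANDAWESOME`) appears; the retry bounds (`i++ <= 15`, `sleep 2`) and the `grep -c` device count are visible in the trace. A sketch under those assumptions:

```shell
# Hedged sketch of waitforserial: wait for $want block devices whose
# SERIAL column matches, retrying up to 16 times with 2s between polls.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 found
    while ((i++ <= 15)); do
        # Count matching devices; swallow grep's nonzero exit on no match.
        found=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial" || true)
        ((found >= want)) && return 0
        sleep 2
    done
    return 1
}
```

With the device already present, the first poll succeeds and no sleep is taken, which is why the connect step above completes in a single 2-second `sleep` cycle.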
00:32:49.203 11:34:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:49.203 11:34:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:49.203 11:34:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:49.203 11:34:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1972007 0 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 0 idle 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:51.108 11:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972007 root 20 0 128.2g 75264 33152 S 0.0 0.1 0:20.55 reactor_0' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972007 root 20 0 128.2g 75264 33152 S 0.0 0.1 0:20.55 reactor_0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1972007 1 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1972007 1 idle 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1972007 00:32:51.368 
11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1972007 -w 256 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1972011 root 20 0 128.2g 75264 33152 S 0.0 0.1 0:10.11 reactor_1' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1972011 root 20 0 128.2g 75264 33152 S 0.0 0.1 0:10.11 reactor_1 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:51.368 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:51.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.628 rmmod nvme_tcp 00:32:51.628 rmmod nvme_fabrics 00:32:51.628 rmmod nvme_keyring 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.628 11:34:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1972007 ']' 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1972007 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1972007 ']' 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1972007 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972007 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972007' 00:32:51.628 killing process with pid 1972007 00:32:51.628 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1972007 00:32:51.629 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1972007 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.888 11:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.440 11:34:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.440 00:32:54.440 real 0m23.546s 00:32:54.440 user 0m39.786s 00:32:54.440 sys 0m8.548s 00:32:54.440 11:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.440 11:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.440 ************************************ 00:32:54.440 END TEST nvmf_interrupt 00:32:54.440 ************************************ 00:32:54.440 00:32:54.440 real 27m45.287s 00:32:54.440 user 57m51.343s 00:32:54.440 sys 9m27.274s 00:32:54.440 11:34:26 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.440 11:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.440 ************************************ 00:32:54.440 END TEST nvmf_tcp 00:32:54.440 ************************************ 00:32:54.440 11:34:26 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:54.440 11:34:26 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:54.440 11:34:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:54.440 11:34:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.440 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:32:54.440 ************************************ 
00:32:54.440 START TEST spdkcli_nvmf_tcp 00:32:54.440 ************************************ 00:32:54.440 11:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:54.440 * Looking for test storage... 00:32:54.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.440 --rc genhtml_branch_coverage=1 00:32:54.440 --rc genhtml_function_coverage=1 00:32:54.440 --rc genhtml_legend=1 00:32:54.440 --rc geninfo_all_blocks=1 00:32:54.440 --rc geninfo_unexecuted_blocks=1 00:32:54.440 00:32:54.440 ' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.440 --rc genhtml_branch_coverage=1 00:32:54.440 --rc genhtml_function_coverage=1 00:32:54.440 --rc genhtml_legend=1 00:32:54.440 --rc geninfo_all_blocks=1 
00:32:54.440 --rc geninfo_unexecuted_blocks=1 00:32:54.440 00:32:54.440 ' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.440 --rc genhtml_branch_coverage=1 00:32:54.440 --rc genhtml_function_coverage=1 00:32:54.440 --rc genhtml_legend=1 00:32:54.440 --rc geninfo_all_blocks=1 00:32:54.440 --rc geninfo_unexecuted_blocks=1 00:32:54.440 00:32:54.440 ' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.440 --rc genhtml_branch_coverage=1 00:32:54.440 --rc genhtml_function_coverage=1 00:32:54.440 --rc genhtml_legend=1 00:32:54.440 --rc geninfo_all_blocks=1 00:32:54.440 --rc geninfo_unexecuted_blocks=1 00:32:54.440 00:32:54.440 ' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:54.440 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1975277 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1975277 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1975277 ']' 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.441 
11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.441 11:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.441 [2024-12-06 11:34:27.205452] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:32:54.441 [2024-12-06 11:34:27.205502] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975277 ] 00:32:54.441 [2024-12-06 11:34:27.279760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:54.441 [2024-12-06 11:34:27.321473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.441 [2024-12-06 11:34:27.321475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.378 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:55.379 11:34:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:55.379 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:55.379 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:55.379 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:55.379 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:55.379 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:55.379 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:55.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:55.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:55.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:55.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:55.379 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:55.379 ' 00:32:57.914 [2024-12-06 11:34:30.728010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.291 [2024-12-06 11:34:32.072483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:01.823 [2024-12-06 11:34:34.556027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:33:04.355 [2024-12-06 11:34:36.718663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:05.733 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:05.733 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:05.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:05.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:05.733 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:05.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:05.733 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:05.733 11:34:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:05.993 11:34:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:06.251 11:34:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:06.251 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:06.251 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:06.251 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:06.251 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:06.251 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:06.251 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:06.251 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:06.251 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:06.251 ' 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:12.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:12.819 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:12.819 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:12.819 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1975277 ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975277' 00:33:12.819 killing process with pid 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1975277 00:33:12.819 11:34:44 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1975277 ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1975277 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1975277 ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1975277 00:33:12.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1975277) - No such process 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1975277 is not found' 00:33:12.819 Process with pid 1975277 is not found 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:12.819 00:33:12.819 real 0m17.894s 00:33:12.819 user 0m39.439s 00:33:12.819 sys 0m0.823s 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:12.819 11:34:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.819 ************************************ 00:33:12.819 END TEST spdkcli_nvmf_tcp 00:33:12.819 ************************************ 00:33:12.819 11:34:44 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:12.819 11:34:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:12.819 11:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 
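The `killprocess` sequence traced above checks that the pid still exists (`kill -0`), inspects the process's command name so it never signals `sudo` itself, then SIGKILLs and waits. A hedged re-creation (the function name mirrors the trace, but the body is a sketch, not the exact `autotest_common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard seen in the trace above. Assumes GNU ps.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    if ! kill -0 "$pid" 2> /dev/null; then
        # Matches the "No such process" branch in the log.
        echo "Process with pid $pid is not found"
        return 0
    fi
    process_name=$(ps -o comm= -p "$pid")
    if [ "$process_name" = "sudo" ]; then
        # Signalling sudo itself would orphan the real workload;
        # the real helper resolves the child pid instead.
        return 1
    fi
    echo "killing process with pid $pid"
    kill -9 "$pid" 2> /dev/null
}
```

This is why the second `killprocess` call in the log reports "Process with pid 1975277 is not found": the first call already reaped the target, so `kill -0` fails and the function exits cleanly.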
00:33:12.819 11:34:44 -- common/autotest_common.sh@10 -- # set +x 00:33:12.819 ************************************ 00:33:12.819 START TEST nvmf_identify_passthru 00:33:12.819 ************************************ 00:33:12.819 11:34:44 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:12.819 * Looking for test storage... 00:33:12.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.819 11:34:44 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:12.819 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:12.819 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:12.819 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.820 --rc genhtml_branch_coverage=1 00:33:12.820 --rc genhtml_function_coverage=1 00:33:12.820 --rc genhtml_legend=1 00:33:12.820 --rc geninfo_all_blocks=1 00:33:12.820 --rc geninfo_unexecuted_blocks=1 00:33:12.820 
00:33:12.820 ' 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.820 --rc genhtml_branch_coverage=1 00:33:12.820 --rc genhtml_function_coverage=1 00:33:12.820 --rc genhtml_legend=1 00:33:12.820 --rc geninfo_all_blocks=1 00:33:12.820 --rc geninfo_unexecuted_blocks=1 00:33:12.820 00:33:12.820 ' 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.820 --rc genhtml_branch_coverage=1 00:33:12.820 --rc genhtml_function_coverage=1 00:33:12.820 --rc genhtml_legend=1 00:33:12.820 --rc geninfo_all_blocks=1 00:33:12.820 --rc geninfo_unexecuted_blocks=1 00:33:12.820 00:33:12.820 ' 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.820 --rc genhtml_branch_coverage=1 00:33:12.820 --rc genhtml_function_coverage=1 00:33:12.820 --rc genhtml_legend=1 00:33:12.820 --rc geninfo_all_blocks=1 00:33:12.820 --rc geninfo_unexecuted_blocks=1 00:33:12.820 00:33:12.820 ' 00:33:12.820 11:34:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.820 11:34:45 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:12.820 11:34:45 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:12.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.820 11:34:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:12.820 11:34:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.820 11:34:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.820 11:34:45 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.820 11:34:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.219 
11:34:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:18.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:18.219 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.219 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:18.220 Found net devices under 0000:af:00.0: cvl_0_0 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.220 11:34:50 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:18.220 Found net devices under 0000:af:00.1: cvl_0_1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.220 
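The discovery loop above ("Found net devices under 0000:af:00.0: cvl_0_0") maps each NVMe-oF-capable NIC's PCI address to its kernel net device by globbing sysfs, the same way `nvmf/common.sh` populates `pci_net_devs`. A minimal stand-in (the function name and the optional base-dir parameter are illustrative, not SPDK helpers):

```shell
#!/usr/bin/env bash
# Map a PCI address to the net device name(s) the kernel created for it,
# by listing /sys/bus/pci/devices/<bdf>/net/. Requires a Linux sysfs.
pci_to_netdevs() {
    # $2 lets callers point at a fake sysfs tree; defaults to the real one.
    local pci=$1 base=${2:-/sys/bus/pci/devices} devs
    devs=("$base/$pci/net/"*)
    [ -e "${devs[0]}" ] || { echo "no net devices under $pci" >&2; return 1; }
    printf '%s\n' "${devs[@]##*/}"   # strip the path, keep only the ifname
}
```

On the test node this yields `cvl_0_0` for `0000:af:00.0` and `cvl_0_1` for `0000:af:00.1`, which become the target and initiator interfaces below.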
11:34:50 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.220 11:34:50 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:18.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:33:18.220 00:33:18.220 --- 10.0.0.2 ping statistics --- 00:33:18.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.220 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:18.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:33:18.220 00:33:18.220 --- 10.0.0.1 ping statistics --- 00:33:18.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.220 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:18.220 11:34:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:18.220 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.220 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:18.220 
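The `nvmf_tcp_init` steps traced above build a two-sided TCP test bed on one host: one port is moved into a private network namespace as the NVMe-oF target (10.0.0.2), while its peer stays in the root namespace as the initiator (10.0.0.1). A dry-run sketch of that topology (the `run` indirection just echoes; swap it for `sudo "$@"` to actually apply, which requires root and these exact interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up in the log above.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # replace with: run() { sudo "$@"; } to apply for real

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                      # target side
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"               # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe-oF TCP listener port (4420) toward the initiator.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check both ways, matching the ping output in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target in a namespace forces real TCP traffic over the NIC pair instead of the loopback path, which is why the log then runs `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.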
11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:18.220 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:18.478 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:18.478 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:33:18.478 11:34:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:86:00.0 00:33:18.479 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:33:18.479 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:33:18.479 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:33:18.479 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:18.479 11:34:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:22.670 11:34:55 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:33:22.670 11:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:33:22.670 11:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:22.670 11:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1983081 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.892 11:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1983081 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1983081 ']' 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.892 11:34:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:26.892 [2024-12-06 11:34:59.789168] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:33:26.892 [2024-12-06 11:34:59.789217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.150 [2024-12-06 11:34:59.864407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:27.150 [2024-12-06 11:34:59.905399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.150 [2024-12-06 11:34:59.905434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.150 [2024-12-06 11:34:59.905441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.150 [2024-12-06 11:34:59.905446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.150 [2024-12-06 11:34:59.905451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:27.150 [2024-12-06 11:34:59.906849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.150 [2024-12-06 11:34:59.906964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.150 [2024-12-06 11:34:59.907088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.150 [2024-12-06 11:34:59.907088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.719 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.719 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:27.720 11:35:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:27.720 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.720 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.720 INFO: Log level set to 20 00:33:27.720 INFO: Requests: 00:33:27.720 { 00:33:27.720 "jsonrpc": "2.0", 00:33:27.720 "method": "nvmf_set_config", 00:33:27.720 "id": 1, 00:33:27.720 "params": { 00:33:27.720 "admin_cmd_passthru": { 00:33:27.720 "identify_ctrlr": true 00:33:27.720 } 00:33:27.720 } 00:33:27.720 } 00:33:27.720 00:33:27.720 INFO: response: 00:33:27.720 { 00:33:27.720 "jsonrpc": "2.0", 00:33:27.720 "id": 1, 00:33:27.720 "result": true 00:33:27.720 } 00:33:27.720 00:33:27.720 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.720 11:35:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:27.720 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.720 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.720 INFO: Setting log level to 20 00:33:27.720 INFO: Setting log level to 20 00:33:27.720 INFO: Log level set to 20 00:33:27.720 INFO: Log level set to 20 00:33:27.720 
INFO: Requests: 00:33:27.720 { 00:33:27.720 "jsonrpc": "2.0", 00:33:27.720 "method": "framework_start_init", 00:33:27.720 "id": 1 00:33:27.720 } 00:33:27.720 00:33:27.720 INFO: Requests: 00:33:27.720 { 00:33:27.720 "jsonrpc": "2.0", 00:33:27.720 "method": "framework_start_init", 00:33:27.720 "id": 1 00:33:27.720 } 00:33:27.720 00:33:27.979 [2024-12-06 11:35:00.687273] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:27.979 INFO: response: 00:33:27.979 { 00:33:27.979 "jsonrpc": "2.0", 00:33:27.979 "id": 1, 00:33:27.979 "result": true 00:33:27.979 } 00:33:27.979 00:33:27.979 INFO: response: 00:33:27.979 { 00:33:27.979 "jsonrpc": "2.0", 00:33:27.979 "id": 1, 00:33:27.979 "result": true 00:33:27.979 } 00:33:27.979 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.979 11:35:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.979 INFO: Setting log level to 40 00:33:27.979 INFO: Setting log level to 40 00:33:27.979 INFO: Setting log level to 40 00:33:27.979 [2024-12-06 11:35:00.700509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.979 11:35:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.979 11:35:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:33:27.979 11:35:00 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.979 11:35:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.263 Nvme0n1 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.264 [2024-12-06 11:35:03.616917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.264 11:35:03 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.264 [ 00:33:31.264 { 00:33:31.264 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:31.264 "subtype": "Discovery", 00:33:31.264 "listen_addresses": [], 00:33:31.264 "allow_any_host": true, 00:33:31.264 "hosts": [] 00:33:31.264 }, 00:33:31.264 { 00:33:31.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.264 "subtype": "NVMe", 00:33:31.264 "listen_addresses": [ 00:33:31.264 { 00:33:31.264 "trtype": "TCP", 00:33:31.264 "adrfam": "IPv4", 00:33:31.264 "traddr": "10.0.0.2", 00:33:31.264 "trsvcid": "4420" 00:33:31.264 } 00:33:31.264 ], 00:33:31.264 "allow_any_host": true, 00:33:31.264 "hosts": [], 00:33:31.264 "serial_number": "SPDK00000000000001", 00:33:31.264 "model_number": "SPDK bdev Controller", 00:33:31.264 "max_namespaces": 1, 00:33:31.264 "min_cntlid": 1, 00:33:31.264 "max_cntlid": 65519, 00:33:31.264 "namespaces": [ 00:33:31.264 { 00:33:31.264 "nsid": 1, 00:33:31.264 "bdev_name": "Nvme0n1", 00:33:31.264 "name": "Nvme0n1", 00:33:31.264 "nguid": "551D8E8F123D434AA138A1778342D183", 00:33:31.264 "uuid": "551d8e8f-123d-434a-a138-a1778342d183" 00:33:31.264 } 00:33:31.264 ] 00:33:31.264 } 00:33:31.264 ] 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:31.264 11:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.264 rmmod nvme_tcp 00:33:31.264 rmmod nvme_fabrics 00:33:31.264 rmmod nvme_keyring 00:33:31.264 11:35:03 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1983081 ']' 00:33:31.264 11:35:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1983081 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1983081 ']' 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1983081 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:31.264 11:35:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983081 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983081' 00:33:31.264 killing process with pid 1983081 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1983081 00:33:31.264 11:35:04 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1983081 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:33.166 11:35:05 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:33.166 11:35:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.166 11:35:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:33.166 11:35:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.073 11:35:07 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:35.073 00:33:35.073 real 0m22.780s 00:33:35.073 user 0m29.728s 00:33:35.073 sys 0m6.308s 00:33:35.073 11:35:07 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.073 11:35:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.073 ************************************ 00:33:35.073 END TEST nvmf_identify_passthru 00:33:35.073 ************************************ 00:33:35.073 11:35:07 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:35.073 11:35:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:35.073 11:35:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.073 11:35:07 -- common/autotest_common.sh@10 -- # set +x 00:33:35.073 ************************************ 00:33:35.073 START TEST nvmf_dif 00:33:35.073 ************************************ 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:35.073 * Looking for test storage... 
00:33:35.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:35.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.073 --rc genhtml_branch_coverage=1 00:33:35.073 --rc genhtml_function_coverage=1 00:33:35.073 --rc genhtml_legend=1 00:33:35.073 --rc geninfo_all_blocks=1 00:33:35.073 --rc geninfo_unexecuted_blocks=1 00:33:35.073 00:33:35.073 ' 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:35.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.073 --rc genhtml_branch_coverage=1 00:33:35.073 --rc genhtml_function_coverage=1 00:33:35.073 --rc genhtml_legend=1 00:33:35.073 --rc geninfo_all_blocks=1 00:33:35.073 --rc geninfo_unexecuted_blocks=1 00:33:35.073 00:33:35.073 ' 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:33:35.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.073 --rc genhtml_branch_coverage=1 00:33:35.073 --rc genhtml_function_coverage=1 00:33:35.073 --rc genhtml_legend=1 00:33:35.073 --rc geninfo_all_blocks=1 00:33:35.073 --rc geninfo_unexecuted_blocks=1 00:33:35.073 00:33:35.073 ' 00:33:35.073 11:35:07 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:35.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.073 --rc genhtml_branch_coverage=1 00:33:35.073 --rc genhtml_function_coverage=1 00:33:35.073 --rc genhtml_legend=1 00:33:35.073 --rc geninfo_all_blocks=1 00:33:35.073 --rc geninfo_unexecuted_blocks=1 00:33:35.073 00:33:35.073 ' 00:33:35.073 11:35:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:35.073 11:35:07 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.073 11:35:07 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.073 11:35:07 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.073 11:35:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.073 11:35:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.074 11:35:07 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.074 11:35:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:35.074 11:35:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:35.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.074 11:35:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:35.074 11:35:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:35.074 11:35:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:35.074 11:35:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:35.074 11:35:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.074 11:35:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:35.074 11:35:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:35.074 11:35:07 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:35.074 11:35:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:41.645 11:35:13 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:41.645 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:41.645 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:41.645 11:35:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.646 11:35:13 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:41.646 Found net devices under 0000:af:00.0: cvl_0_0 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:41.646 Found net devices under 0000:af:00.1: cvl_0_1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.646 
11:35:13 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:41.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:33:41.646 00:33:41.646 --- 10.0.0.2 ping statistics --- 00:33:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.646 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:33:41.646 00:33:41.646 --- 10.0.0.1 ping statistics --- 00:33:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.646 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:41.646 11:35:13 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:44.181 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:44.181 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:44.181 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:44.181 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:44.181 11:35:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:44.181 11:35:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1988918 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1988918 00:33:44.181 11:35:16 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1988918 ']' 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:44.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.181 11:35:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:44.181 [2024-12-06 11:35:16.843832] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:33:44.181 [2024-12-06 11:35:16.843872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.181 [2024-12-06 11:35:16.918903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.181 [2024-12-06 11:35:16.958323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.181 [2024-12-06 11:35:16.958353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.181 [2024-12-06 11:35:16.958360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.181 [2024-12-06 11:35:16.958366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.181 [2024-12-06 11:35:16.958371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:44.181 [2024-12-06 11:35:16.958923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.749 11:35:17 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.749 11:35:17 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:44.749 11:35:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:44.749 11:35:17 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:44.750 11:35:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:44.750 11:35:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.750 11:35:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:44.750 11:35:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:44.750 11:35:17 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.750 11:35:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 [2024-12-06 11:35:17.687002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.009 11:35:17 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.009 11:35:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:45.009 11:35:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:45.009 11:35:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.009 11:35:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 ************************************ 00:33:45.009 START TEST fio_dif_1_default 00:33:45.009 ************************************ 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 bdev_null0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:45.009 [2024-12-06 11:35:17.755295] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:45.009 { 00:33:45.009 "params": { 00:33:45.009 "name": "Nvme$subsystem", 00:33:45.009 "trtype": "$TEST_TRANSPORT", 00:33:45.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:45.009 "adrfam": "ipv4", 00:33:45.009 "trsvcid": "$NVMF_PORT", 00:33:45.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:45.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:45.009 "hdgst": ${hdgst:-false}, 00:33:45.009 "ddgst": ${ddgst:-false} 00:33:45.009 }, 00:33:45.009 "method": "bdev_nvme_attach_controller" 00:33:45.009 } 00:33:45.009 EOF 00:33:45.009 )") 00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:45.009 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:45.010 "params": { 00:33:45.010 "name": "Nvme0", 00:33:45.010 "trtype": "tcp", 00:33:45.010 "traddr": "10.0.0.2", 00:33:45.010 "adrfam": "ipv4", 00:33:45.010 "trsvcid": "4420", 00:33:45.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:45.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:45.010 "hdgst": false, 00:33:45.010 "ddgst": false 00:33:45.010 }, 00:33:45.010 "method": "bdev_nvme_attach_controller" 00:33:45.010 }' 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:45.010 11:35:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:45.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:45.268 fio-3.35 
00:33:45.268 Starting 1 thread 00:33:57.475 00:33:57.475 filename0: (groupid=0, jobs=1): err= 0: pid=1989346: Fri Dec 6 11:35:28 2024 00:33:57.475 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10031msec) 00:33:57.475 slat (nsec): min=5358, max=26926, avg=5741.93, stdev=653.88 00:33:57.475 clat (usec): min=368, max=43330, avg=21012.26, stdev=20568.28 00:33:57.475 lat (usec): min=374, max=43357, avg=21018.00, stdev=20568.24 00:33:57.475 clat percentiles (usec): 00:33:57.475 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 453], 00:33:57.475 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[41157], 00:33:57.475 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:57.475 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:33:57.475 | 99.99th=[43254] 00:33:57.475 bw ( KiB/s): min= 704, max= 832, per=100.00%, avg=761.60, stdev=28.62, samples=20 00:33:57.475 iops : min= 176, max= 208, avg=190.40, stdev= 7.16, samples=20 00:33:57.475 lat (usec) : 500=29.14%, 750=20.96% 00:33:57.475 lat (msec) : 50=49.90% 00:33:57.475 cpu : usr=92.69%, sys=7.03%, ctx=51, majf=0, minf=0 00:33:57.475 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.475 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.475 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:57.475 00:33:57.475 Run status group 0 (all jobs): 00:33:57.475 READ: bw=761KiB/s (779kB/s), 761KiB/s-761KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10031-10031msec 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.475 00:33:57.475 real 0m11.409s 00:33:57.475 user 0m18.758s 00:33:57.475 sys 0m1.079s 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:57.475 ************************************ 00:33:57.475 END TEST fio_dif_1_default 00:33:57.475 ************************************ 00:33:57.475 11:35:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:57.475 11:35:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:57.475 11:35:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.475 11:35:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:57.475 ************************************ 00:33:57.475 START TEST fio_dif_1_multi_subsystems 00:33:57.475 ************************************ 00:33:57.475 11:35:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:57.475 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 bdev_null0 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 [2024-12-06 11:35:29.236971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 bdev_null1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:57.476 { 00:33:57.476 "params": { 00:33:57.476 "name": "Nvme$subsystem", 00:33:57.476 "trtype": "$TEST_TRANSPORT", 00:33:57.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.476 "adrfam": "ipv4", 00:33:57.476 "trsvcid": "$NVMF_PORT", 00:33:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.476 "hdgst": ${hdgst:-false}, 00:33:57.476 "ddgst": ${ddgst:-false} 00:33:57.476 }, 00:33:57.476 "method": "bdev_nvme_attach_controller" 00:33:57.476 } 00:33:57.476 EOF 00:33:57.476 )") 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:57.476 { 00:33:57.476 "params": { 00:33:57.476 "name": "Nvme$subsystem", 00:33:57.476 "trtype": "$TEST_TRANSPORT", 00:33:57.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.476 "adrfam": "ipv4", 00:33:57.476 "trsvcid": "$NVMF_PORT", 00:33:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.476 "hdgst": ${hdgst:-false}, 00:33:57.476 "ddgst": ${ddgst:-false} 00:33:57.476 }, 00:33:57.476 "method": "bdev_nvme_attach_controller" 00:33:57.476 } 00:33:57.476 EOF 00:33:57.476 )") 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:57.476 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:57.476 "params": { 00:33:57.476 "name": "Nvme0", 00:33:57.476 "trtype": "tcp", 00:33:57.476 "traddr": "10.0.0.2", 00:33:57.476 "adrfam": "ipv4", 00:33:57.476 "trsvcid": "4420", 00:33:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:57.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:57.476 "hdgst": false, 00:33:57.476 "ddgst": false 00:33:57.476 }, 00:33:57.476 "method": "bdev_nvme_attach_controller" 00:33:57.476 },{ 00:33:57.476 "params": { 00:33:57.476 "name": "Nvme1", 00:33:57.476 "trtype": "tcp", 00:33:57.476 "traddr": "10.0.0.2", 00:33:57.477 "adrfam": "ipv4", 00:33:57.477 "trsvcid": "4420", 00:33:57.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:57.477 "hdgst": false, 00:33:57.477 "ddgst": false 00:33:57.477 }, 00:33:57.477 "method": "bdev_nvme_attach_controller" 00:33:57.477 }' 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:57.477 11:35:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:57.477 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:57.477 fio-3.35 00:33:57.477 Starting 2 threads 00:34:09.684 00:34:09.684 filename0: (groupid=0, jobs=1): err= 0: pid=1991448: Fri Dec 6 11:35:40 2024 00:34:09.684 read: IOPS=191, BW=766KiB/s (785kB/s)(7680KiB/10021msec) 00:34:09.684 slat (nsec): min=5535, max=42786, avg=6624.01, stdev=2352.33 00:34:09.684 clat (usec): min=345, max=42566, avg=20857.55, stdev=20386.95 00:34:09.684 lat (usec): min=351, max=42572, avg=20864.17, stdev=20386.26 00:34:09.684 clat percentiles (usec): 00:34:09.684 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 375], 00:34:09.684 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[40633], 60.00th=[40633], 00:34:09.684 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:09.684 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:09.684 | 99.99th=[42730] 00:34:09.684 bw ( KiB/s): min= 672, max= 832, per=66.45%, avg=766.40, stdev=33.60, samples=20 00:34:09.684 iops : min= 168, max= 208, avg=191.60, stdev= 8.40, samples=20 00:34:09.684 lat (usec) : 500=46.67%, 750=2.92%, 1000=0.16% 00:34:09.684 lat (msec) : 2=0.05%, 50=50.21% 00:34:09.684 cpu : usr=96.93%, sys=2.79%, ctx=14, majf=0, minf=132 00:34:09.684 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:09.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.684 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.684 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:09.684 filename1: (groupid=0, jobs=1): err= 0: pid=1991449: Fri Dec 6 11:35:40 2024 00:34:09.684 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10009msec) 00:34:09.684 slat (nsec): min=5547, max=36524, avg=7388.41, stdev=3070.73 00:34:09.684 clat (usec): min=40851, max=42092, avg=41336.27, stdev=475.39 00:34:09.684 lat (usec): min=40857, max=42103, avg=41343.66, stdev=475.43 00:34:09.684 clat percentiles (usec): 00:34:09.684 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:09.684 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:09.684 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:09.684 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:09.684 | 99.99th=[42206] 00:34:09.684 bw ( KiB/s): min= 352, max= 416, per=33.40%, avg=385.60, stdev=12.61, samples=20 00:34:09.684 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:34:09.684 lat (msec) : 50=100.00% 00:34:09.684 cpu : usr=96.89%, sys=2.83%, ctx=27, majf=0, minf=146 00:34:09.684 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.684 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.684 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:09.684 00:34:09.684 Run status group 0 (all jobs): 00:34:09.684 READ: bw=1153KiB/s (1180kB/s), 387KiB/s-766KiB/s (396kB/s-785kB/s), io=11.3MiB (11.8MB), run=10009-10021msec 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.684 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 11:35:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 00:34:09.685 real 0m11.534s 00:34:09.685 user 0m28.680s 00:34:09.685 sys 0m0.925s 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 ************************************ 00:34:09.685 END TEST fio_dif_1_multi_subsystems 00:34:09.685 ************************************ 00:34:09.685 11:35:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:09.685 11:35:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:09.685 11:35:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 ************************************ 00:34:09.685 START TEST fio_dif_rand_params 00:34:09.685 ************************************ 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:09.685 11:35:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 bdev_null0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.685 [2024-12-06 11:35:40.851222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:09.685 { 00:34:09.685 "params": { 00:34:09.685 "name": "Nvme$subsystem", 00:34:09.685 "trtype": "$TEST_TRANSPORT", 00:34:09.685 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:09.685 "adrfam": "ipv4", 00:34:09.685 "trsvcid": "$NVMF_PORT", 00:34:09.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.685 "hdgst": ${hdgst:-false}, 00:34:09.685 "ddgst": ${ddgst:-false} 00:34:09.685 }, 00:34:09.685 "method": "bdev_nvme_attach_controller" 00:34:09.685 } 00:34:09.685 EOF 00:34:09.685 )") 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:09.685 11:35:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:09.685 "params": { 00:34:09.685 "name": "Nvme0", 00:34:09.685 "trtype": "tcp", 00:34:09.685 "traddr": "10.0.0.2", 00:34:09.685 "adrfam": "ipv4", 00:34:09.685 "trsvcid": "4420", 00:34:09.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:09.685 "hdgst": false, 00:34:09.685 "ddgst": false 00:34:09.685 }, 00:34:09.685 "method": "bdev_nvme_attach_controller" 00:34:09.685 }' 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:09.685 11:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.685 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:09.685 ... 00:34:09.685 fio-3.35 00:34:09.685 Starting 3 threads 00:34:13.875 00:34:13.875 filename0: (groupid=0, jobs=1): err= 0: pid=1993579: Fri Dec 6 11:35:46 2024 00:34:13.875 read: IOPS=339, BW=42.5MiB/s (44.5MB/s)(212MiB/5002msec) 00:34:13.875 slat (nsec): min=5652, max=26879, avg=10579.26, stdev=2036.28 00:34:13.875 clat (usec): min=3252, max=50205, avg=8819.94, stdev=3308.76 00:34:13.875 lat (usec): min=3262, max=50232, avg=8830.52, stdev=3309.10 00:34:13.875 clat percentiles (usec): 00:34:13.875 | 1.00th=[ 3490], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7570], 00:34:13.875 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:34:13.875 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[10945], 00:34:13.875 | 99.00th=[12125], 99.50th=[46924], 99.90th=[49546], 99.95th=[50070], 00:34:13.875 | 99.99th=[50070] 00:34:13.875 bw ( KiB/s): min=39424, max=48128, per=33.69%, avg=43443.20, stdev=2956.16, samples=10 00:34:13.875 iops : min= 308, max= 376, avg=339.40, stdev=23.09, samples=10 00:34:13.875 lat (msec) : 4=3.18%, 10=78.22%, 20=18.07%, 50=0.47%, 100=0.06% 00:34:13.875 cpu : usr=94.36%, sys=5.34%, ctx=9, majf=0, minf=19 00:34:13.875 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 issued rwts: total=1699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:13.875 filename0: (groupid=0, jobs=1): err= 0: pid=1993580: Fri Dec 6 11:35:46 2024 00:34:13.875 read: IOPS=358, BW=44.8MiB/s (47.0MB/s)(226MiB/5047msec) 00:34:13.875 slat (nsec): min=5623, max=24959, avg=10547.09, stdev=1972.54 
00:34:13.875 clat (usec): min=3463, max=49480, avg=8338.92, stdev=3321.02 00:34:13.875 lat (usec): min=3469, max=49492, avg=8349.47, stdev=3321.03 00:34:13.875 clat percentiles (usec): 00:34:13.875 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7242], 00:34:13.875 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8455], 00:34:13.875 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[ 9896], 00:34:13.875 | 99.00th=[11207], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:34:13.875 | 99.99th=[49546] 00:34:13.875 bw ( KiB/s): min=39680, max=51712, per=35.84%, avg=46208.00, stdev=2993.36, samples=10 00:34:13.875 iops : min= 310, max= 404, avg=361.00, stdev=23.39, samples=10 00:34:13.875 lat (msec) : 4=0.11%, 10=95.52%, 20=3.76%, 50=0.61% 00:34:13.875 cpu : usr=94.31%, sys=5.39%, ctx=11, majf=0, minf=89 00:34:13.875 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:13.875 filename0: (groupid=0, jobs=1): err= 0: pid=1993581: Fri Dec 6 11:35:46 2024 00:34:13.875 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(197MiB/5043msec) 00:34:13.875 slat (nsec): min=5641, max=28088, avg=10632.46, stdev=2190.49 00:34:13.875 clat (usec): min=2672, max=50575, avg=9554.97, stdev=5757.54 00:34:13.875 lat (usec): min=2679, max=50587, avg=9565.60, stdev=5757.44 00:34:13.875 clat percentiles (usec): 00:34:13.875 | 1.00th=[ 3392], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 7898], 00:34:13.875 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:34:13.875 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945], 00:34:13.875 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 
99.95th=[50594], 00:34:13.875 | 99.99th=[50594] 00:34:13.875 bw ( KiB/s): min=28672, max=44288, per=31.27%, avg=40320.00, stdev=4669.61, samples=10 00:34:13.875 iops : min= 224, max= 346, avg=315.00, stdev=36.48, samples=10 00:34:13.875 lat (msec) : 4=1.90%, 10=79.90%, 20=16.17%, 50=1.84%, 100=0.19% 00:34:13.875 cpu : usr=94.39%, sys=5.32%, ctx=10, majf=0, minf=37 00:34:13.875 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.875 issued rwts: total=1577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:13.875 00:34:13.875 Run status group 0 (all jobs): 00:34:13.875 READ: bw=126MiB/s (132MB/s), 39.1MiB/s-44.8MiB/s (41.0MB/s-47.0MB/s), io=636MiB (666MB), run=5002-5047msec 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:14.136 11:35:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 bdev_null0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 [2024-12-06 11:35:46.999271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 bdev_null1 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.136 bdev_null2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.136 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.397 { 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme$subsystem", 00:34:14.397 "trtype": "$TEST_TRANSPORT", 00:34:14.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "$NVMF_PORT", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.397 "hdgst": ${hdgst:-false}, 00:34:14.397 "ddgst": ${ddgst:-false} 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 } 00:34:14.397 EOF 00:34:14.397 )") 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.397 11:35:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.397 { 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme$subsystem", 00:34:14.397 "trtype": "$TEST_TRANSPORT", 00:34:14.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "$NVMF_PORT", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.397 "hdgst": ${hdgst:-false}, 00:34:14.397 "ddgst": ${ddgst:-false} 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 } 00:34:14.397 EOF 00:34:14.397 )") 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.397 11:35:47 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.397 { 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme$subsystem", 00:34:14.397 "trtype": "$TEST_TRANSPORT", 00:34:14.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "$NVMF_PORT", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.397 "hdgst": ${hdgst:-false}, 00:34:14.397 "ddgst": ${ddgst:-false} 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 } 00:34:14.397 EOF 00:34:14.397 )") 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme0", 00:34:14.397 "trtype": "tcp", 00:34:14.397 "traddr": "10.0.0.2", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "4420", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:14.397 "hdgst": false, 00:34:14.397 "ddgst": false 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 },{ 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme1", 00:34:14.397 "trtype": "tcp", 00:34:14.397 "traddr": "10.0.0.2", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "4420", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:14.397 "hdgst": false, 00:34:14.397 "ddgst": false 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 },{ 00:34:14.397 "params": { 00:34:14.397 "name": "Nvme2", 00:34:14.397 "trtype": "tcp", 00:34:14.397 "traddr": "10.0.0.2", 00:34:14.397 "adrfam": "ipv4", 00:34:14.397 "trsvcid": "4420", 00:34:14.397 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:14.397 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:14.397 "hdgst": false, 00:34:14.397 "ddgst": false 00:34:14.397 }, 00:34:14.397 "method": "bdev_nvme_attach_controller" 00:34:14.397 }' 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.397 11:35:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:14.397 11:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.656 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:14.656 ... 00:34:14.656 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:14.656 ... 00:34:14.656 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:14.656 ... 
00:34:14.656 fio-3.35 00:34:14.656 Starting 24 threads 00:34:26.856 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994773: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=565, BW=2263KiB/s (2317kB/s)(22.1MiB/10011msec) 00:34:26.856 slat (nsec): min=6845, max=80739, avg=14412.26, stdev=10400.93 00:34:26.856 clat (usec): min=9529, max=30168, avg=28157.72, stdev=1840.09 00:34:26.856 lat (usec): min=9543, max=30183, avg=28172.14, stdev=1838.64 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[13173], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.856 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30278], 00:34:26.856 | 99.99th=[30278] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2263.58, stdev=95.91, samples=19 00:34:26.856 iops : min= 544, max= 640, avg=565.89, stdev=23.98, samples=19 00:34:26.856 lat (msec) : 10=0.28%, 20=0.85%, 50=98.87% 00:34:26.856 cpu : usr=98.58%, sys=1.02%, ctx=12, majf=0, minf=9 00:34:26.856 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994774: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=561, BW=2245KiB/s (2299kB/s)(21.9MiB/10006msec) 00:34:26.856 slat (nsec): min=3995, max=85048, avg=30505.61, stdev=17551.07 00:34:26.856 clat (usec): min=15017, max=57535, avg=28205.05, stdev=1809.63 00:34:26.856 lat (usec): min=15035, max=57547, avg=28235.56, stdev=1809.40 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 
1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:26.856 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.856 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[30016], 99.90th=[57410], 99.95th=[57410], 00:34:26.856 | 99.99th=[57410] 00:34:26.856 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2236.84, stdev=77.78, samples=19 00:34:26.856 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:34:26.856 lat (msec) : 20=0.57%, 50=99.15%, 100=0.28% 00:34:26.856 cpu : usr=98.61%, sys=1.00%, ctx=14, majf=0, minf=9 00:34:26.856 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994775: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10012msec) 00:34:26.856 slat (nsec): min=5633, max=91253, avg=22964.85, stdev=7000.80 00:34:26.856 clat (usec): min=13206, max=30368, avg=28232.37, stdev=676.26 00:34:26.856 lat (usec): min=13214, max=30405, avg=28255.33, stdev=676.74 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[27132], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.856 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30278], 99.95th=[30278], 00:34:26.856 | 99.99th=[30278] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.32, stdev=64.68, samples=19 00:34:26.856 iops : min= 544, max= 576, avg=562.58, stdev=16.17, 
samples=19 00:34:26.856 lat (msec) : 20=0.28%, 50=99.72% 00:34:26.856 cpu : usr=98.62%, sys=0.98%, ctx=14, majf=0, minf=9 00:34:26.856 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994776: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10013msec) 00:34:26.856 slat (nsec): min=6883, max=81794, avg=19103.05, stdev=7980.16 00:34:26.856 clat (usec): min=17241, max=39102, avg=28285.21, stdev=1168.04 00:34:26.856 lat (usec): min=17258, max=39115, avg=28304.32, stdev=1167.60 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[26346], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.856 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.856 | 99.00th=[29230], 99.50th=[30278], 99.90th=[39060], 99.95th=[39060], 00:34:26.856 | 99.99th=[39060] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.856 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.856 lat (msec) : 20=0.73%, 50=99.27% 00:34:26.856 cpu : usr=98.59%, sys=1.01%, ctx=13, majf=0, minf=9 00:34:26.856 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 
00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994777: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10002msec) 00:34:26.856 slat (nsec): min=5908, max=80634, avg=29339.52, stdev=12809.77 00:34:26.856 clat (usec): min=16230, max=41382, avg=28144.31, stdev=1016.41 00:34:26.856 lat (usec): min=16237, max=41393, avg=28173.65, stdev=1017.91 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[22414], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:26.856 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[29230], 99.90th=[35914], 99.95th=[35914], 00:34:26.856 | 99.99th=[41157] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.856 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.856 lat (msec) : 20=0.55%, 50=99.45% 00:34:26.856 cpu : usr=98.60%, sys=0.86%, ctx=64, majf=0, minf=9 00:34:26.856 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994778: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10013msec) 00:34:26.856 slat (nsec): min=7234, max=70938, avg=21285.98, stdev=6060.73 00:34:26.856 clat (usec): min=17187, max=43333, avg=28254.25, stdev=747.00 00:34:26.856 lat (usec): min=17201, max=43346, avg=28275.53, stdev=747.28 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:26.856 | 
30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30278], 99.95th=[31327], 00:34:26.856 | 99.99th=[43254] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.856 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.856 lat (msec) : 20=0.36%, 50=99.64% 00:34:26.856 cpu : usr=98.60%, sys=1.00%, ctx=13, majf=0, minf=9 00:34:26.856 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994779: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10021msec) 00:34:26.856 slat (nsec): min=7275, max=82286, avg=28778.93, stdev=16459.51 00:34:26.856 clat (usec): min=9450, max=36130, avg=28098.81, stdev=1663.04 00:34:26.856 lat (usec): min=9463, max=36160, avg=28127.59, stdev=1662.30 00:34:26.856 clat percentiles (usec): 00:34:26.856 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:34:26.856 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.856 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.856 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[35390], 00:34:26.856 | 99.99th=[35914] 00:34:26.856 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2259.20, stdev=75.15, samples=20 00:34:26.856 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:34:26.856 lat (msec) : 10=0.25%, 20=0.60%, 50=99.15% 00:34:26.856 cpu : usr=98.47%, 
sys=1.13%, ctx=14, majf=0, minf=9 00:34:26.856 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:26.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.856 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.856 filename0: (groupid=0, jobs=1): err= 0: pid=1994780: Fri Dec 6 11:35:58 2024 00:34:26.856 read: IOPS=564, BW=2260KiB/s (2314kB/s)(22.1MiB/10009msec) 00:34:26.856 slat (nsec): min=4375, max=77413, avg=20065.20, stdev=14714.37 00:34:26.857 clat (usec): min=13748, max=66666, avg=28245.84, stdev=2642.25 00:34:26.857 lat (usec): min=13756, max=66680, avg=28265.90, stdev=2642.11 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[19268], 5.00th=[23725], 10.00th=[27919], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:26.857 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[29492], 00:34:26.857 | 99.00th=[35914], 99.50th=[41681], 99.90th=[50070], 99.95th=[50070], 00:34:26.857 | 99.99th=[66847] 00:34:26.857 bw ( KiB/s): min= 2112, max= 2368, per=4.18%, avg=2259.11, stdev=57.13, samples=19 00:34:26.857 iops : min= 528, max= 592, avg=564.74, stdev=14.25, samples=19 00:34:26.857 lat (msec) : 20=1.45%, 50=98.27%, 100=0.28% 00:34:26.857 cpu : usr=98.49%, sys=1.11%, ctx=14, majf=0, minf=9 00:34:26.857 IO depths : 1=0.1%, 2=0.2%, 4=1.4%, 8=80.6%, 16=17.8%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=89.5%, 8=9.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994781: Fri Dec 6 
11:35:58 2024 00:34:26.857 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10014msec) 00:34:26.857 slat (nsec): min=4547, max=67339, avg=22120.97, stdev=7097.94 00:34:26.857 clat (usec): min=17188, max=44415, avg=28240.52, stdev=791.10 00:34:26.857 lat (usec): min=17203, max=44428, avg=28262.64, stdev=791.48 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.857 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.857 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30540], 99.95th=[38536], 00:34:26.857 | 99.99th=[44303] 00:34:26.857 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.857 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.857 lat (msec) : 20=0.36%, 50=99.64% 00:34:26.857 cpu : usr=98.87%, sys=0.74%, ctx=13, majf=0, minf=9 00:34:26.857 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994782: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=563, BW=2253KiB/s (2307kB/s)(22.0MiB/10001msec) 00:34:26.857 slat (nsec): min=7094, max=78613, avg=35856.29, stdev=18095.97 00:34:26.857 clat (usec): min=15860, max=30188, avg=28074.33, stdev=898.78 00:34:26.857 lat (usec): min=15871, max=30215, avg=28110.18, stdev=901.13 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:26.857 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:34:26.857 | 
70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.857 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30278], 00:34:26.857 | 99.99th=[30278] 00:34:26.857 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.857 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.857 lat (msec) : 20=0.55%, 50=99.45% 00:34:26.857 cpu : usr=98.76%, sys=0.85%, ctx=14, majf=0, minf=9 00:34:26.857 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994783: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=561, BW=2245KiB/s (2299kB/s)(21.9MiB/10005msec) 00:34:26.857 slat (nsec): min=4360, max=56507, avg=21747.82, stdev=7069.41 00:34:26.857 clat (usec): min=17185, max=48425, avg=28298.20, stdev=1252.43 00:34:26.857 lat (usec): min=17194, max=48438, avg=28319.95, stdev=1252.04 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.857 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.857 | 99.00th=[28967], 99.50th=[30016], 99.90th=[48497], 99.95th=[48497], 00:34:26.857 | 99.99th=[48497] 00:34:26.857 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2243.37, stdev=78.31, samples=19 00:34:26.857 iops : min= 512, max= 576, avg=560.84, stdev=19.58, samples=19 00:34:26.857 lat (msec) : 20=0.28%, 50=99.72% 00:34:26.857 cpu : usr=98.57%, sys=1.05%, ctx=13, majf=0, minf=9 00:34:26.857 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994784: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=585, BW=2340KiB/s (2396kB/s)(22.9MiB/10010msec) 00:34:26.857 slat (nsec): min=6787, max=80911, avg=13108.16, stdev=8313.87 00:34:26.857 clat (usec): min=1092, max=30154, avg=27235.11, stdev=5150.90 00:34:26.857 lat (usec): min=1101, max=30176, avg=27248.21, stdev=5151.15 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[ 1254], 5.00th=[21627], 10.00th=[27919], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:26.857 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.857 | 99.00th=[29230], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:34:26.857 | 99.99th=[30278] 00:34:26.857 bw ( KiB/s): min= 2176, max= 3960, per=4.33%, avg=2344.00, stdev=396.28, samples=19 00:34:26.857 iops : min= 544, max= 990, avg=586.00, stdev=99.07, samples=19 00:34:26.857 lat (msec) : 2=3.01%, 4=0.27%, 10=0.55%, 20=0.82%, 50=95.36% 00:34:26.857 cpu : usr=98.59%, sys=1.02%, ctx=16, majf=0, minf=9 00:34:26.857 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994785: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=561, BW=2245KiB/s 
(2299kB/s)(21.9MiB/10006msec) 00:34:26.857 slat (nsec): min=4561, max=82869, avg=30345.52, stdev=17464.42 00:34:26.857 clat (usec): min=14906, max=71383, avg=28190.75, stdev=1967.83 00:34:26.857 lat (usec): min=14922, max=71396, avg=28221.10, stdev=1968.04 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[27132], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:26.857 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.857 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.857 | 99.00th=[29230], 99.50th=[30016], 99.90th=[56886], 99.95th=[57410], 00:34:26.857 | 99.99th=[71828] 00:34:26.857 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2236.84, stdev=77.78, samples=19 00:34:26.857 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:34:26.857 lat (msec) : 20=0.75%, 50=98.97%, 100=0.28% 00:34:26.857 cpu : usr=98.70%, sys=0.91%, ctx=13, majf=0, minf=9 00:34:26.857 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994786: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10013msec) 00:34:26.857 slat (nsec): min=4355, max=64206, avg=21082.75, stdev=7398.33 00:34:26.857 clat (usec): min=17167, max=39186, avg=28267.09, stdev=878.35 00:34:26.857 lat (usec): min=17187, max=39195, avg=28288.17, stdev=877.94 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[26870], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.857 | 70.00th=[28443], 80.00th=[28443], 
90.00th=[28705], 95.00th=[28705], 00:34:26.857 | 99.00th=[28967], 99.50th=[29492], 99.90th=[39060], 99.95th=[39060], 00:34:26.857 | 99.99th=[39060] 00:34:26.857 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.857 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.857 lat (msec) : 20=0.44%, 50=99.56% 00:34:26.857 cpu : usr=98.68%, sys=0.93%, ctx=14, majf=0, minf=9 00:34:26.857 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.857 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.857 filename1: (groupid=0, jobs=1): err= 0: pid=1994787: Fri Dec 6 11:35:58 2024 00:34:26.857 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10021msec) 00:34:26.857 slat (nsec): min=6753, max=83563, avg=16483.55, stdev=12423.29 00:34:26.857 clat (usec): min=9473, max=30115, avg=28177.06, stdev=1656.88 00:34:26.857 lat (usec): min=9488, max=30130, avg=28193.55, stdev=1654.73 00:34:26.857 clat percentiles (usec): 00:34:26.857 | 1.00th=[20317], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.857 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:26.857 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28705], 00:34:26.857 | 99.00th=[29230], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:34:26.857 | 99.99th=[30016] 00:34:26.857 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2259.20, stdev=75.15, samples=20 00:34:26.857 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:34:26.857 lat (msec) : 10=0.25%, 20=0.60%, 50=99.15% 00:34:26.857 cpu : usr=98.49%, sys=1.11%, ctx=14, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename1: (groupid=0, jobs=1): err= 0: pid=1994788: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10021msec) 00:34:26.858 slat (usec): min=7, max=114, avg=33.48, stdev=17.43 00:34:26.858 clat (usec): min=9770, max=30106, avg=28053.32, stdev=1639.90 00:34:26.858 lat (usec): min=9789, max=30135, avg=28086.81, stdev=1639.53 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.858 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.858 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:34:26.858 | 99.99th=[30016] 00:34:26.858 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2259.20, stdev=75.15, samples=20 00:34:26.858 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:34:26.858 lat (msec) : 10=0.19%, 20=0.65%, 50=99.15% 00:34:26.858 cpu : usr=98.66%, sys=0.94%, ctx=14, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994789: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=561, BW=2245KiB/s (2299kB/s)(21.9MiB/10006msec) 00:34:26.858 slat (nsec): min=4247, 
max=84911, avg=30107.31, stdev=17542.71 00:34:26.858 clat (usec): min=14941, max=57383, avg=28195.76, stdev=1820.67 00:34:26.858 lat (usec): min=14953, max=57397, avg=28225.87, stdev=1820.68 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:26.858 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.858 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.858 | 99.00th=[28967], 99.50th=[29754], 99.90th=[57410], 99.95th=[57410], 00:34:26.858 | 99.99th=[57410] 00:34:26.858 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2236.84, stdev=77.78, samples=19 00:34:26.858 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:34:26.858 lat (msec) : 20=0.61%, 50=99.11%, 100=0.28% 00:34:26.858 cpu : usr=98.82%, sys=0.79%, ctx=14, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994790: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=565, BW=2262KiB/s (2317kB/s)(22.1MiB/10007msec) 00:34:26.858 slat (nsec): min=6704, max=81756, avg=20342.29, stdev=15743.23 00:34:26.858 clat (usec): min=6619, max=57271, avg=28147.40, stdev=3190.56 00:34:26.858 lat (usec): min=6626, max=57285, avg=28167.74, stdev=3189.37 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23987], 20.00th=[27919], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:26.858 | 70.00th=[28443], 80.00th=[28443], 90.00th=[32113], 95.00th=[33424], 00:34:26.858 | 99.00th=[34341], 
99.50th=[40109], 99.90th=[57410], 99.95th=[57410], 00:34:26.858 | 99.99th=[57410] 00:34:26.858 bw ( KiB/s): min= 2052, max= 2336, per=4.17%, avg=2254.53, stdev=67.40, samples=19 00:34:26.858 iops : min= 513, max= 584, avg=563.63, stdev=16.85, samples=19 00:34:26.858 lat (msec) : 10=0.28%, 20=0.28%, 50=99.15%, 100=0.28% 00:34:26.858 cpu : usr=98.52%, sys=1.10%, ctx=12, majf=0, minf=9 00:34:26.858 IO depths : 1=2.1%, 2=4.4%, 4=10.5%, 8=70.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=90.7%, 8=6.1%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994791: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=563, BW=2253KiB/s (2307kB/s)(22.0MiB/10001msec) 00:34:26.858 slat (nsec): min=6319, max=80658, avg=34693.42, stdev=18356.25 00:34:26.858 clat (usec): min=15855, max=30123, avg=28070.97, stdev=885.35 00:34:26.858 lat (usec): min=15866, max=30139, avg=28105.66, stdev=888.28 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:26.858 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:34:26.858 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.858 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:34:26.858 | 99.99th=[30016] 00:34:26.858 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=64.93, samples=19 00:34:26.858 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:34:26.858 lat (msec) : 20=0.48%, 50=99.52% 00:34:26.858 cpu : usr=98.65%, sys=0.97%, ctx=14, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994792: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10021msec) 00:34:26.858 slat (nsec): min=6957, max=78740, avg=26167.43, stdev=15490.99 00:34:26.858 clat (usec): min=9608, max=36497, avg=28114.53, stdev=1668.95 00:34:26.858 lat (usec): min=9624, max=36513, avg=28140.70, stdev=1667.74 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[20055], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.858 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.858 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[34866], 00:34:26.858 | 99.99th=[36439] 00:34:26.858 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2259.20, stdev=75.15, samples=20 00:34:26.858 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:34:26.858 lat (msec) : 10=0.23%, 20=0.62%, 50=99.15% 00:34:26.858 cpu : usr=98.83%, sys=0.77%, ctx=13, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994793: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10013msec) 00:34:26.858 slat (nsec): min=7381, max=64076, avg=22028.38, stdev=6789.98 
00:34:26.858 clat (usec): min=17193, max=43944, avg=28246.89, stdev=846.76 00:34:26.858 lat (usec): min=17210, max=43960, avg=28268.91, stdev=847.08 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:26.858 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.858 | 99.00th=[28967], 99.50th=[29492], 99.90th=[38536], 99.95th=[39060], 00:34:26.858 | 99.99th=[43779] 00:34:26.858 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2250.11, stdev=62.24, samples=19 00:34:26.858 iops : min= 544, max= 576, avg=562.53, stdev=15.56, samples=19 00:34:26.858 lat (msec) : 20=0.39%, 50=99.61% 00:34:26.858 cpu : usr=98.67%, sys=0.94%, ctx=13, majf=0, minf=9 00:34:26.858 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994794: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=562, BW=2249KiB/s (2303kB/s)(22.0MiB/10016msec) 00:34:26.858 slat (nsec): min=4200, max=55981, avg=17328.21, stdev=7530.47 00:34:26.858 clat (usec): min=17215, max=38993, avg=28311.20, stdev=1073.01 00:34:26.858 lat (usec): min=17230, max=39003, avg=28328.53, stdev=1072.51 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:26.858 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.858 | 99.00th=[29492], 99.50th=[30540], 99.90th=[38536], 99.95th=[39060], 
00:34:26.858 | 99.99th=[39060] 00:34:26.858 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2243.37, stdev=65.66, samples=19 00:34:26.858 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:34:26.858 lat (msec) : 20=0.62%, 50=99.38% 00:34:26.858 cpu : usr=98.59%, sys=1.01%, ctx=20, majf=0, minf=9 00:34:26.858 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:26.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.858 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.858 filename2: (groupid=0, jobs=1): err= 0: pid=1994795: Fri Dec 6 11:35:58 2024 00:34:26.858 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10021msec) 00:34:26.858 slat (nsec): min=7056, max=80601, avg=32777.58, stdev=17308.44 00:34:26.858 clat (usec): min=9503, max=30042, avg=28056.05, stdev=1630.86 00:34:26.858 lat (usec): min=9528, max=30056, avg=28088.82, stdev=1630.91 00:34:26.858 clat percentiles (usec): 00:34:26.858 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:26.858 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.859 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:26.859 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:34:26.859 | 99.99th=[30016] 00:34:26.859 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2259.20, stdev=75.15, samples=20 00:34:26.859 iops : min= 544, max= 608, avg=564.80, stdev=18.79, samples=20 00:34:26.859 lat (msec) : 10=0.26%, 20=0.58%, 50=99.15% 00:34:26.859 cpu : usr=98.60%, sys=1.00%, ctx=14, majf=0, minf=9 00:34:26.859 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:26.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.859 complete : 
0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.859 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.859 filename2: (groupid=0, jobs=1): err= 0: pid=1994796: Fri Dec 6 11:35:58 2024 00:34:26.859 read: IOPS=561, BW=2244KiB/s (2298kB/s)(21.9MiB/10007msec) 00:34:26.859 slat (nsec): min=6578, max=59979, avg=24232.16, stdev=10381.59 00:34:26.859 clat (usec): min=7298, max=57129, avg=28303.80, stdev=2075.62 00:34:26.859 lat (usec): min=7307, max=57142, avg=28328.03, stdev=2075.51 00:34:26.859 clat percentiles (usec): 00:34:26.859 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:26.859 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:26.859 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:34:26.859 | 99.00th=[29230], 99.50th=[38011], 99.90th=[56886], 99.95th=[56886], 00:34:26.859 | 99.99th=[56886] 00:34:26.859 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2236.84, stdev=77.78, samples=19 00:34:26.859 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:34:26.859 lat (msec) : 10=0.25%, 20=0.32%, 50=99.14%, 100=0.29% 00:34:26.859 cpu : usr=98.81%, sys=0.80%, ctx=62, majf=0, minf=9 00:34:26.859 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:26.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.859 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.859 issued rwts: total=5614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:26.859 00:34:26.859 Run status group 0 (all jobs): 00:34:26.859 READ: bw=52.8MiB/s (55.4MB/s), 2244KiB/s-2340KiB/s (2298kB/s-2396kB/s), io=530MiB (555MB), run=10001-10021msec 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:26.859 
11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:26.859 
11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 bdev_null0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 [2024-12-06 11:35:58.909860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 bdev_null1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.859 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.859 { 00:34:26.859 "params": { 00:34:26.859 "name": "Nvme$subsystem", 00:34:26.859 "trtype": "$TEST_TRANSPORT", 00:34:26.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.859 "adrfam": "ipv4", 00:34:26.859 "trsvcid": "$NVMF_PORT", 00:34:26.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.860 "hdgst": ${hdgst:-false}, 00:34:26.860 "ddgst": ${ddgst:-false} 00:34:26.860 }, 00:34:26.860 "method": "bdev_nvme_attach_controller" 00:34:26.860 } 00:34:26.860 EOF 00:34:26.860 )") 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:26.860 11:35:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.860 { 00:34:26.860 "params": { 00:34:26.860 "name": "Nvme$subsystem", 00:34:26.860 "trtype": "$TEST_TRANSPORT", 00:34:26.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.860 "adrfam": "ipv4", 00:34:26.860 "trsvcid": "$NVMF_PORT", 00:34:26.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.860 "hdgst": ${hdgst:-false}, 00:34:26.860 "ddgst": ${ddgst:-false} 00:34:26.860 }, 00:34:26.860 "method": "bdev_nvme_attach_controller" 00:34:26.860 } 00:34:26.860 EOF 00:34:26.860 )") 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.860 "params": { 00:34:26.860 "name": "Nvme0", 00:34:26.860 "trtype": "tcp", 00:34:26.860 "traddr": "10.0.0.2", 00:34:26.860 "adrfam": "ipv4", 00:34:26.860 "trsvcid": "4420", 00:34:26.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:26.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:26.860 "hdgst": false, 00:34:26.860 "ddgst": false 00:34:26.860 }, 00:34:26.860 "method": "bdev_nvme_attach_controller" 00:34:26.860 },{ 00:34:26.860 "params": { 00:34:26.860 "name": "Nvme1", 00:34:26.860 "trtype": "tcp", 00:34:26.860 "traddr": "10.0.0.2", 00:34:26.860 "adrfam": "ipv4", 00:34:26.860 "trsvcid": "4420", 00:34:26.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.860 "hdgst": false, 00:34:26.860 "ddgst": false 00:34:26.860 }, 00:34:26.860 "method": "bdev_nvme_attach_controller" 00:34:26.860 }' 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.860 11:35:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:26.860 11:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:26.860 11:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:26.860 11:35:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:26.860 11:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.860 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:26.860 ... 00:34:26.860 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:26.860 ... 00:34:26.860 fio-3.35 00:34:26.860 Starting 4 threads 00:34:33.420 00:34:33.420 filename0: (groupid=0, jobs=1): err= 0: pid=1997033: Fri Dec 6 11:36:05 2024 00:34:33.420 read: IOPS=3162, BW=24.7MiB/s (25.9MB/s)(124MiB/5001msec) 00:34:33.420 slat (usec): min=5, max=131, avg= 9.87, stdev= 4.92 00:34:33.420 clat (usec): min=758, max=5113, avg=2499.38, stdev=410.45 00:34:33.420 lat (usec): min=780, max=5124, avg=2509.26, stdev=410.71 00:34:33.420 clat percentiles (usec): 00:34:33.420 | 1.00th=[ 1549], 5.00th=[ 1909], 10.00th=[ 2040], 20.00th=[ 2180], 00:34:33.420 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2474], 60.00th=[ 2573], 00:34:33.420 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3163], 00:34:33.420 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4686], 99.95th=[ 4948], 00:34:33.420 | 99.99th=[ 5014] 00:34:33.420 bw ( KiB/s): min=23984, max=27328, per=27.36%, avg=25224.89, stdev=1046.60, samples=9 00:34:33.420 iops : min= 2998, max= 3416, avg=3153.11, stdev=130.82, samples=9 00:34:33.420 lat (usec) : 1000=0.01% 00:34:33.420 lat (msec) : 2=7.84%, 4=91.70%, 10=0.44% 00:34:33.420 cpu : usr=96.22%, sys=3.40%, ctx=11, majf=0, minf=9 00:34:33.420 IO depths : 1=0.4%, 2=9.8%, 4=60.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 complete : 0=0.0%, 
4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 issued rwts: total=15814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.420 filename0: (groupid=0, jobs=1): err= 0: pid=1997034: Fri Dec 6 11:36:05 2024 00:34:33.420 read: IOPS=2857, BW=22.3MiB/s (23.4MB/s)(112MiB/5002msec) 00:34:33.420 slat (nsec): min=5594, max=71921, avg=9735.99, stdev=4748.70 00:34:33.420 clat (usec): min=677, max=5205, avg=2771.81, stdev=447.12 00:34:33.420 lat (usec): min=688, max=5215, avg=2781.55, stdev=446.89 00:34:33.420 clat percentiles (usec): 00:34:33.420 | 1.00th=[ 1827], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2442], 00:34:33.420 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802], 00:34:33.420 | 70.00th=[ 2933], 80.00th=[ 3097], 90.00th=[ 3326], 95.00th=[ 3523], 00:34:33.420 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 4883], 00:34:33.420 | 99.99th=[ 5145] 00:34:33.420 bw ( KiB/s): min=21584, max=24320, per=24.73%, avg=22801.56, stdev=751.27, samples=9 00:34:33.420 iops : min= 2698, max= 3040, avg=2850.11, stdev=93.87, samples=9 00:34:33.420 lat (usec) : 750=0.01%, 1000=0.03% 00:34:33.420 lat (msec) : 2=2.78%, 4=95.79%, 10=1.39% 00:34:33.420 cpu : usr=96.88%, sys=2.76%, ctx=12, majf=0, minf=9 00:34:33.420 IO depths : 1=0.1%, 2=4.6%, 4=65.6%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 issued rwts: total=14292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.420 filename1: (groupid=0, jobs=1): err= 0: pid=1997035: Fri Dec 6 11:36:05 2024 00:34:33.420 read: IOPS=2665, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:34:33.420 slat (nsec): min=5550, max=71934, avg=9683.78, stdev=4920.70 00:34:33.420 clat (usec): min=614, max=5434, 
avg=2972.20, stdev=453.25 00:34:33.420 lat (usec): min=627, max=5457, avg=2981.89, stdev=452.79 00:34:33.420 clat percentiles (usec): 00:34:33.420 | 1.00th=[ 2073], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2671], 00:34:33.420 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2999], 00:34:33.420 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3523], 95.00th=[ 3851], 00:34:33.420 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 5080], 00:34:33.420 | 99.99th=[ 5407] 00:34:33.420 bw ( KiB/s): min=20352, max=22112, per=23.00%, avg=21210.00, stdev=511.68, samples=9 00:34:33.420 iops : min= 2544, max= 2764, avg=2651.22, stdev=63.95, samples=9 00:34:33.420 lat (usec) : 750=0.02%, 1000=0.05% 00:34:33.420 lat (msec) : 2=0.66%, 4=95.97%, 10=3.31% 00:34:33.420 cpu : usr=92.94%, sys=4.82%, ctx=424, majf=0, minf=9 00:34:33.420 IO depths : 1=0.2%, 2=2.0%, 4=71.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 issued rwts: total=13332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.420 filename1: (groupid=0, jobs=1): err= 0: pid=1997036: Fri Dec 6 11:36:05 2024 00:34:33.420 read: IOPS=2842, BW=22.2MiB/s (23.3MB/s)(111MiB/5003msec) 00:34:33.420 slat (nsec): min=5544, max=72294, avg=9634.06, stdev=4834.58 00:34:33.420 clat (usec): min=654, max=5086, avg=2784.88, stdev=439.62 00:34:33.420 lat (usec): min=660, max=5098, avg=2794.51, stdev=439.32 00:34:33.420 clat percentiles (usec): 00:34:33.420 | 1.00th=[ 1729], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2474], 00:34:33.420 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2835], 00:34:33.420 | 70.00th=[ 2933], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3523], 00:34:33.420 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[ 5014], 00:34:33.420 
| 99.99th=[ 5080] 00:34:33.420 bw ( KiB/s): min=21168, max=24656, per=24.54%, avg=22625.78, stdev=1010.55, samples=9 00:34:33.420 iops : min= 2646, max= 3082, avg=2828.22, stdev=126.32, samples=9 00:34:33.420 lat (usec) : 750=0.01% 00:34:33.420 lat (msec) : 2=2.74%, 4=95.84%, 10=1.41% 00:34:33.420 cpu : usr=96.12%, sys=3.52%, ctx=7, majf=0, minf=9 00:34:33.420 IO depths : 1=0.5%, 2=4.2%, 4=67.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.420 issued rwts: total=14223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.420 00:34:33.420 Run status group 0 (all jobs): 00:34:33.420 READ: bw=90.0MiB/s (94.4MB/s), 20.8MiB/s-24.7MiB/s (21.8MB/s-25.9MB/s), io=450MiB (472MB), run=5001-5003msec 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.420 00:34:33.420 real 0m24.713s 00:34:33.420 user 4m59.994s 00:34:33.420 sys 0m4.979s 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:33.420 11:36:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.420 ************************************ 00:34:33.420 END TEST fio_dif_rand_params 00:34:33.420 ************************************ 00:34:33.420 11:36:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:33.421 11:36:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:34:33.421 11:36:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.421 11:36:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:33.421 ************************************ 00:34:33.421 START TEST fio_dif_digest 00:34:33.421 ************************************ 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.421 bdev_null0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.421 [2024-12-06 11:36:05.637824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:33.421 
11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.421 { 00:34:33.421 "params": { 00:34:33.421 "name": "Nvme$subsystem", 00:34:33.421 "trtype": "$TEST_TRANSPORT", 00:34:33.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.421 "adrfam": "ipv4", 00:34:33.421 "trsvcid": "$NVMF_PORT", 00:34:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.421 "hdgst": ${hdgst:-false}, 00:34:33.421 "ddgst": ${ddgst:-false} 00:34:33.421 }, 00:34:33.421 "method": "bdev_nvme_attach_controller" 00:34:33.421 } 00:34:33.421 EOF 00:34:33.421 )") 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.421 "params": { 00:34:33.421 "name": "Nvme0", 00:34:33.421 "trtype": "tcp", 00:34:33.421 "traddr": "10.0.0.2", 00:34:33.421 "adrfam": "ipv4", 00:34:33.421 "trsvcid": "4420", 00:34:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:33.421 "hdgst": true, 00:34:33.421 "ddgst": true 00:34:33.421 }, 00:34:33.421 "method": "bdev_nvme_attach_controller" 00:34:33.421 }' 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:33.421 11:36:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.421 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:33.421 ... 00:34:33.421 fio-3.35 00:34:33.421 Starting 3 threads 00:34:45.629 00:34:45.629 filename0: (groupid=0, jobs=1): err= 0: pid=1998345: Fri Dec 6 11:36:16 2024 00:34:45.629 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(398MiB/10047msec) 00:34:45.629 slat (usec): min=5, max=215, avg=18.32, stdev= 8.06 00:34:45.629 clat (usec): min=6771, max=54046, avg=9422.85, stdev=1265.33 00:34:45.629 lat (usec): min=6782, max=54058, avg=9441.17, stdev=1265.60 00:34:45.629 clat percentiles (usec): 00:34:45.629 | 1.00th=[ 7832], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:34:45.629 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:34:45.629 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:34:45.630 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12518], 99.95th=[47449], 00:34:45.630 | 99.99th=[54264] 00:34:45.630 bw ( KiB/s): min=37376, max=43264, per=35.75%, avg=40768.00, stdev=1231.59, samples=20 00:34:45.630 iops : min= 292, max= 338, avg=318.50, stdev= 9.62, samples=20 00:34:45.630 lat (msec) : 
10=81.46%, 20=18.48%, 50=0.03%, 100=0.03% 00:34:45.630 cpu : usr=96.24%, sys=3.42%, ctx=21, majf=0, minf=74 00:34:45.630 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 issued rwts: total=3187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:45.630 filename0: (groupid=0, jobs=1): err= 0: pid=1998346: Fri Dec 6 11:36:16 2024 00:34:45.630 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(364MiB/10044msec) 00:34:45.630 slat (nsec): min=5963, max=47168, avg=16504.07, stdev=7328.17 00:34:45.630 clat (usec): min=7859, max=48521, avg=10318.93, stdev=1264.00 00:34:45.630 lat (usec): min=7886, max=48546, avg=10335.44, stdev=1263.31 00:34:45.630 clat percentiles (usec): 00:34:45.630 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:34:45.630 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:34:45.630 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:34:45.630 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14877], 99.95th=[45876], 00:34:45.630 | 99.99th=[48497] 00:34:45.630 bw ( KiB/s): min=33536, max=39168, per=32.66%, avg=37235.20, stdev=1332.74, samples=20 00:34:45.630 iops : min= 262, max= 306, avg=290.90, stdev=10.41, samples=20 00:34:45.630 lat (msec) : 10=37.55%, 20=62.38%, 50=0.07% 00:34:45.630 cpu : usr=96.50%, sys=3.19%, ctx=19, majf=0, minf=54 00:34:45.630 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 issued rwts: total=2911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.630 latency : target=0, window=0, percentile=100.00%, depth=3 
00:34:45.630 filename0: (groupid=0, jobs=1): err= 0: pid=1998347: Fri Dec 6 11:36:16 2024 00:34:45.630 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(357MiB/10045msec) 00:34:45.630 slat (nsec): min=5928, max=63662, avg=15908.69, stdev=7476.75 00:34:45.630 clat (usec): min=7766, max=50395, avg=10532.93, stdev=1319.11 00:34:45.630 lat (usec): min=7792, max=50407, avg=10548.84, stdev=1318.23 00:34:45.630 clat percentiles (usec): 00:34:45.630 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:45.630 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:45.630 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[12125], 00:34:45.630 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[47449], 00:34:45.630 | 99.99th=[50594] 00:34:45.630 bw ( KiB/s): min=32256, max=37888, per=31.99%, avg=36480.00, stdev=1319.80, samples=20 00:34:45.630 iops : min= 252, max= 296, avg=285.00, stdev=10.31, samples=20 00:34:45.630 lat (msec) : 10=28.09%, 20=71.84%, 50=0.04%, 100=0.04% 00:34:45.630 cpu : usr=96.49%, sys=3.19%, ctx=16, majf=0, minf=74 00:34:45.630 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.630 issued rwts: total=2852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:45.630 00:34:45.630 Run status group 0 (all jobs): 00:34:45.630 READ: bw=111MiB/s (117MB/s), 35.5MiB/s-39.7MiB/s (37.2MB/s-41.6MB/s), io=1119MiB (1173MB), run=10044-10047msec 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.630 00:34:45.630 real 0m11.279s 00:34:45.630 user 0m38.265s 00:34:45.630 sys 0m1.307s 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.630 11:36:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.630 ************************************ 00:34:45.630 END TEST fio_dif_digest 00:34:45.630 ************************************ 00:34:45.630 11:36:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:45.630 11:36:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.630 rmmod nvme_tcp 00:34:45.630 rmmod 
nvme_fabrics 00:34:45.630 rmmod nvme_keyring 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1988918 ']' 00:34:45.630 11:36:16 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1988918 00:34:45.630 11:36:16 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1988918 ']' 00:34:45.630 11:36:16 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1988918 00:34:45.630 11:36:16 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:45.630 11:36:16 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.630 11:36:16 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988918 00:34:45.630 11:36:17 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:45.630 11:36:17 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:45.630 11:36:17 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988918' 00:34:45.630 killing process with pid 1988918 00:34:45.630 11:36:17 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1988918 00:34:45.630 11:36:17 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1988918 00:34:45.630 11:36:17 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:45.630 11:36:17 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:47.079 Waiting for block devices as requested 00:34:47.079 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:47.379 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:47.379 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:47.379 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:47.652 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:47.652 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:47.652 
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:47.652 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:47.911 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:47.911 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:47.911 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:47.911 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:48.170 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:48.170 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:48.170 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:48.429 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:48.429 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:48.429 11:36:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.429 11:36:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.429 11:36:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.969 11:36:23 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:50.969 00:34:50.969 real 1m15.610s 00:34:50.969 user 7m26.900s 00:34:50.969 sys 0m20.109s 00:34:50.969 11:36:23 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.969 11:36:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.969 ************************************ 00:34:50.969 END TEST nvmf_dif 00:34:50.969 
************************************ 00:34:50.969 11:36:23 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:50.969 11:36:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:50.969 11:36:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.969 11:36:23 -- common/autotest_common.sh@10 -- # set +x 00:34:50.969 ************************************ 00:34:50.969 START TEST nvmf_abort_qd_sizes 00:34:50.969 ************************************ 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:50.969 * Looking for test storage... 00:34:50.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- 
scripts/common.sh@340 -- # ver1_l=2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:50.969 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.969 11:36:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:50.969 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:50.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:34:50.970 11:36:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.537 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:57.538 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:57.538 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:57.538 Found net devices under 0000:af:00.0: cvl_0_0 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:57.538 Found net devices under 0000:af:00.1: cvl_0_1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:57.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:57.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:34:57.538 00:34:57.538 --- 10.0.0.2 ping statistics --- 00:34:57.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.538 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:57.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:34:57.538 00:34:57.538 --- 10.0.0.1 ping statistics --- 00:34:57.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.538 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:57.538 11:36:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.447 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:59.447 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:59.706 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:59.706 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:59.707 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:59.707 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:59.707 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:59.707 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:59.707 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:59.707 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:00.640 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2007002 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:00.640 11:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2007002 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2007002 ']' 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:00.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.641 11:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:00.641 [2024-12-06 11:36:33.564194] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:35:00.641 [2024-12-06 11:36:33.564234] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.899 [2024-12-06 11:36:33.639139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:00.899 [2024-12-06 11:36:33.681553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.899 [2024-12-06 11:36:33.681583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.899 [2024-12-06 11:36:33.681590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.899 [2024-12-06 11:36:33.681595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.899 [2024-12-06 11:36:33.681600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
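For readers following the trace: the `nvmftestinit`/`nvmfappstart` sequence recorded above boils down to a short series of privileged commands. The sketch below is reconstructed from the xtrace lines in this log, not taken from the test scripts themselves; the interface names (`cvl_0_0`/`cvl_0_1`), addresses, and `nvmf_tgt` flags are those of this particular run, and the commands require root plus a built SPDK tree, so this is a non-runnable configuration sketch rather than a drop-in script.

```shell
# Sketch reconstructed from the trace above (assumes ifaces cvl_0_0/cvl_0_1 from this run; requires root)
ip netns add cvl_0_0_ns_spdk                                   # isolate the target-side NIC
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
# launch the SPDK NVMe-oF target inside the namespace, with the flags shown in the log
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
```

Running the target under `ip netns exec` is what lets the test ping 10.0.0.2 from the default namespace and attach over TCP as if target and initiator were separate hosts on one machine.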
00:35:00.899 [2024-12-06 11:36:33.683130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.899 [2024-12-06 11:36:33.683241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:00.899 [2024-12-06 11:36:33.683350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.899 [2024-12-06 11:36:33.683351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.464 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.464 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:01.464 11:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:01.464 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:01.464 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:86:00.0 ]] 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 
00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:86:00.0 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.722 11:36:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.722 ************************************ 00:35:01.722 START TEST spdk_target_abort 00:35:01.722 ************************************ 00:35:01.722 11:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:01.722 11:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:01.722 11:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:35:01.722 11:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.722 11:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.007 spdk_targetn1 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.007 [2024-12-06 11:36:37.303510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.007 [2024-12-06 11:36:37.359838] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:05.007 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.008 11:36:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.294 Initializing NVMe Controllers 00:35:08.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.295 Initialization complete. Launching workers. 
00:35:08.295 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16729, failed: 0 00:35:08.295 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1449, failed to submit 15280 00:35:08.295 success 717, unsuccessful 732, failed 0 00:35:08.295 11:36:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.295 11:36:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:11.577 Initializing NVMe Controllers 00:35:11.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:11.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:11.577 Initialization complete. Launching workers. 00:35:11.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8933, failed: 0 00:35:11.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1262, failed to submit 7671 00:35:11.577 success 285, unsuccessful 977, failed 0 00:35:11.577 11:36:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:11.577 11:36:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:14.861 Initializing NVMe Controllers 00:35:14.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:14.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:14.861 Initialization complete. Launching workers. 
00:35:14.861 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40789, failed: 0 00:35:14.861 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2816, failed to submit 37973 00:35:14.861 success 584, unsuccessful 2232, failed 0 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.861 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2007002 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2007002 ']' 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2007002 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007002 00:35:15.798 11:36:48 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2007002' 00:35:15.798 killing process with pid 2007002 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2007002 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2007002 00:35:15.798 00:35:15.798 real 0m14.209s 00:35:15.798 user 0m56.412s 00:35:15.798 sys 0m2.743s 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.798 11:36:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:15.798 ************************************ 00:35:15.798 END TEST spdk_target_abort 00:35:15.798 ************************************ 00:35:15.798 11:36:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:15.798 11:36:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:15.798 11:36:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.798 11:36:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.058 ************************************ 00:35:16.058 START TEST kernel_target_abort 00:35:16.058 ************************************ 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:16.058 11:36:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.058 11:36:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:18.595 Waiting for block devices as requested 00:35:18.595 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:35:18.855 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:18.855 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:19.116 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:19.116 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:19.116 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:19.116 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:19.375 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:19.375 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:19.375 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:19.634 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:19.634 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:19.634 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:19.894 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:19.894 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:19.894 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:19.894 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:20.155 11:36:52 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:20.155 No valid GPT data, bailing 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:20.155 11:36:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:20.155 00:35:20.155 Discovery Log Number of Records 2, Generation counter 2 00:35:20.155 =====Discovery Log Entry 0====== 00:35:20.155 trtype: tcp 00:35:20.155 adrfam: ipv4 00:35:20.155 subtype: current discovery subsystem 00:35:20.155 treq: not specified, sq flow control disable supported 00:35:20.155 portid: 1 00:35:20.155 trsvcid: 4420 00:35:20.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:20.155 traddr: 10.0.0.1 00:35:20.155 eflags: none 00:35:20.155 sectype: none 00:35:20.155 =====Discovery Log Entry 1====== 00:35:20.155 trtype: tcp 00:35:20.155 adrfam: ipv4 00:35:20.155 subtype: nvme subsystem 00:35:20.155 treq: not specified, sq flow control disable supported 00:35:20.155 portid: 1 00:35:20.155 trsvcid: 4420 00:35:20.155 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:20.155 traddr: 10.0.0.1 00:35:20.155 eflags: none 00:35:20.155 sectype: none 00:35:20.155 11:36:53 
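The `mkdir`/`echo`/`ln -s` sequence traced above (`configure_kernel_target` in nvmf/common.sh) drives the kernel NVMe target through configfs. A condensed sketch of that sequence is below; the attribute file names are the real nvmet configfs interface, while the NQN, backing device, address, and port number are this run's values, shown as examples. On a live system the base directory must be `/sys/kernel/config/nvmet`, with root and the `nvmet`/`nvmet_tcp` modules loaded.

```shell
# Sketch of the configfs steps from the trace above. Parameterized on a base
# directory so it can be exercised outside configfs; attribute names follow
# the nvmet configfs layout the script writes to.
setup_nvmet_target() {
    local base=$1 nqn=$2 dev=$3 ip=$4
    local subsys=$base/subsystems/$nqn port=$base/ports/1
    mkdir -p "$subsys/namespaces/1" "$port/subsystems"
    echo "SPDK-$nqn" > "$subsys/attr_model"            # model string, as the script echoes
    echo 1           > "$subsys/attr_allow_any_host"   # accept any host NQN
    echo "$dev"      > "$subsys/namespaces/1/device_path"
    echo 1           > "$subsys/namespaces/1/enable"
    echo "$ip"       > "$port/addr_traddr"
    echo tcp         > "$port/addr_trtype"
    echo 4420        > "$port/addr_trsvcid"
    echo ipv4        > "$port/addr_adrfam"
    # Exposing the subsystem on the port is a symlink in configfs:
    ln -s "$subsys" "$port/subsystems/"
}
# On a live system (root, nvmet + nvmet_tcp loaded):
# setup_nvmet_target /sys/kernel/config/nvmet nqn.2016-06.io.spdk:testnqn /dev/nvme0n1 10.0.0.1
```

After the symlink lands, `nvme discover -t tcp -a 10.0.0.1 -s 4420` reports the subsystem, which is exactly what the discovery log entries above show.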
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.155 11:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.446 Initializing NVMe Controllers 00:35:23.446 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:23.446 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:23.446 Initialization complete. Launching workers. 
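The `rabort` loop above runs the SPDK abort example once per queue depth in `qds=(4 24 64)`, passing the assembled transport ID via `-r`. The sketch below reproduces just that loop, printing each command line instead of launching the tool; the binary path is relative to an SPDK checkout and should be adjusted to yours.

```shell
# Command lines the qd loop in abort_qd_sizes.sh produces, one per queue depth.
abort_bin=./build/examples/abort   # path assumes an SPDK build tree
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
abort_cmdlines() {
    local qd
    for qd in 4 24 64; do
        # -q queue depth, -w rw -M 50: 50/50 mixed I/O, -o 4096: 4 KiB I/O size
        printf '%s -q %s -w rw -M 50 -o 4096 -r %s\n' "$abort_bin" "$qd" "$target"
    done
}
abort_cmdlines
```

At `-q 4` every submitted abort targets an in-flight command, which is why the first run above reports `abort submitted 85308, failed to submit 0` while the deeper queues leave most I/O uncovered.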
00:35:23.446 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85308, failed: 0 00:35:23.446 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85308, failed to submit 0 00:35:23.446 success 0, unsuccessful 85308, failed 0 00:35:23.446 11:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.446 11:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:26.738 Initializing NVMe Controllers 00:35:26.738 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:26.738 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:26.738 Initialization complete. Launching workers. 00:35:26.738 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 155611, failed: 0 00:35:26.738 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29858, failed to submit 125753 00:35:26.738 success 0, unsuccessful 29858, failed 0 00:35:26.738 11:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:26.738 11:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.045 Initializing NVMe Controllers 00:35:30.045 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.045 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.045 Initialization complete. Launching workers. 
00:35:30.045 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138526, failed: 0 00:35:30.046 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34682, failed to submit 103844 00:35:30.046 success 0, unsuccessful 34682, failed 0 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:30.046 11:37:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:32.581 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:32.581 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:33.517 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:35:33.517 00:35:33.517 real 0m17.641s 00:35:33.517 user 0m8.550s 00:35:33.517 sys 0m5.342s 00:35:33.517 11:37:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.517 11:37:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:33.517 ************************************ 00:35:33.517 END TEST kernel_target_abort 00:35:33.517 ************************************ 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.517 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.517 rmmod nvme_tcp 00:35:33.517 rmmod nvme_fabrics 00:35:33.775 rmmod nvme_keyring 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
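`clean_kernel_target`, traced above, tears the configfs tree down in reverse order: disable the namespace, drop the port-to-subsystem symlink, then remove the directories before unloading the modules. Shown here as a dry run that only prints the steps, since executing them needs root against `/sys/kernel/config/nvmet`.

```shell
# Teardown order mirrors setup in reverse; the symlink must go before the dirs.
base=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
teardown_cmds() {
    printf '%s\n' \
        "echo 0 > $base/subsystems/$nqn/namespaces/1/enable" \
        "rm -f $base/ports/1/subsystems/$nqn" \
        "rmdir $base/subsystems/$nqn/namespaces/1" \
        "rmdir $base/ports/1" \
        "rmdir $base/subsystems/$nqn"
}
teardown_cmds
# ...followed by: modprobe -r nvmet_tcp nvmet
```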
-- # modprobe -v -r nvme-fabrics 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2007002 ']' 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2007002 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2007002 ']' 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2007002 00:35:33.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2007002) - No such process 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2007002 is not found' 00:35:33.775 Process with pid 2007002 is not found 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:33.775 11:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:36.311 Waiting for block devices as requested 00:35:36.311 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:35:36.569 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:36.569 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:36.827 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:36.827 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:36.827 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:37.085 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:37.085 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:37.085 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:37.085 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:37.344 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:37.344 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:37.344 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:37.603 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:37.603 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:37.603 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:37.603 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.862 11:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.769 11:37:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:39.769 00:35:39.769 real 0m49.252s 00:35:39.769 user 1m9.547s 00:35:39.769 sys 0m16.798s 00:35:39.769 11:37:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.769 11:37:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.769 ************************************ 00:35:39.769 END TEST nvmf_abort_qd_sizes 00:35:39.769 ************************************ 00:35:40.029 11:37:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:40.029 11:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:40.029 11:37:12 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:40.029 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:35:40.029 ************************************ 00:35:40.029 START TEST keyring_file 00:35:40.029 ************************************ 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:40.029 * Looking for test storage... 00:35:40.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.029 11:37:12 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:40.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.029 --rc genhtml_branch_coverage=1 00:35:40.029 --rc genhtml_function_coverage=1 00:35:40.029 --rc genhtml_legend=1 00:35:40.029 --rc geninfo_all_blocks=1 00:35:40.029 --rc geninfo_unexecuted_blocks=1 00:35:40.029 00:35:40.029 ' 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:40.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.029 --rc genhtml_branch_coverage=1 00:35:40.029 --rc genhtml_function_coverage=1 00:35:40.029 --rc genhtml_legend=1 00:35:40.029 --rc geninfo_all_blocks=1 00:35:40.029 --rc 
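The `lt 1.15 2` check traced above (`cmp_versions` in scripts/common.sh) splits both versions on `.`/`-`/`:` and compares field by field. A minimal standalone equivalent, using GNU `sort -V` rather than the script's manual loop, is:

```shell
# True when $1 is a strictly lower version than $2 (assumes GNU sort -V).
version_lt() {
    [ "$1" = "$2" ] && return 1
    # The lower version sorts first under version sort.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Field-wise comparison matters here: plain lexicographic ordering would call `1.15` greater than `1.2`, which is the bug the helper avoids.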
geninfo_unexecuted_blocks=1 00:35:40.029 00:35:40.029 ' 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:40.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.029 --rc genhtml_branch_coverage=1 00:35:40.029 --rc genhtml_function_coverage=1 00:35:40.029 --rc genhtml_legend=1 00:35:40.029 --rc geninfo_all_blocks=1 00:35:40.029 --rc geninfo_unexecuted_blocks=1 00:35:40.029 00:35:40.029 ' 00:35:40.029 11:37:12 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:40.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.029 --rc genhtml_branch_coverage=1 00:35:40.029 --rc genhtml_function_coverage=1 00:35:40.029 --rc genhtml_legend=1 00:35:40.029 --rc geninfo_all_blocks=1 00:35:40.029 --rc geninfo_unexecuted_blocks=1 00:35:40.029 00:35:40.029 ' 00:35:40.029 11:37:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:40.029 11:37:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.029 11:37:12 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.029 11:37:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.029 11:37:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.289 11:37:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.289 11:37:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.289 11:37:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.289 11:37:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.289 11:37:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.289 11:37:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.289 11:37:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:40.289 11:37:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
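The `paths/export.sh` chain above prepends the same Go/protoc/golangci directories on every `source`, which is why the final `PATH` carries each of them four times. Duplicates are harmless but noisy; a small first-occurrence dedup pass looks like this (my own sketch, not part of the scripts above):

```shell
# Keep the first occurrence of each PATH entry, preserving order.
dedup_path() {
    local out= dir IFS=:
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                 # already seen, skip
            *) out=${out:+$out:}$dir ;;    # first occurrence, keep
        esac
    done
    printf '%s\n' "$out"
}
dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"   # -> /a/bin:/b/bin:/c/bin
```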
00:35:40.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:40.289 11:37:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iY3ylMO1Zv 00:35:40.289 11:37:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:40.289 11:37:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iY3ylMO1Zv 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iY3ylMO1Zv 00:35:40.289 11:37:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iY3ylMO1Zv 00:35:40.289 11:37:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CTsVOrU6he 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:40.289 11:37:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CTsVOrU6he 00:35:40.289 11:37:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CTsVOrU6he 00:35:40.289 11:37:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CTsVOrU6he 
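The `python -` step traced above (`format_key` in nvmf/common.sh) converts the hex PSK into the NVMe/TCP interchange form: base64 of the key bytes plus their CRC32, wrapped as `NVMeTLSkey-1:<digest>:<base64>:`. The sketch below is my reading of that helper; in particular the two-digit digest field and little-endian CRC byte order are assumptions, so treat it as illustrative rather than a canonical encoder.

```shell
# Hex PSK and digest selector as used by prep_key in the trace above.
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# Append CRC32 of the key material, then base64 the 20-byte blob.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
echo "$psk"
```

The result is then written to a `mktemp` file and `chmod 0600`-ed, as the `key0path`/`key1path` steps that follow show.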
00:35:40.289 11:37:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=2016445 00:35:40.289 11:37:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2016445 00:35:40.290 11:37:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2016445 ']' 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.290 11:37:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:40.290 [2024-12-06 11:37:13.131145] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:35:40.290 [2024-12-06 11:37:13.131190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016445 ] 00:35:40.290 [2024-12-06 11:37:13.202314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.549 [2024-12-06 11:37:13.241465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.117 11:37:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.117 11:37:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:41.118 11:37:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:41.118 [2024-12-06 11:37:13.938439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.118 null0 00:35:41.118 [2024-12-06 11:37:13.970488] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:41.118 [2024-12-06 11:37:13.970719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.118 11:37:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.118 11:37:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:41.118 [2024-12-06 11:37:13.998551] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:41.118 request: 00:35:41.118 { 00:35:41.118 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.118 "secure_channel": false, 00:35:41.118 "listen_address": { 00:35:41.118 "trtype": "tcp", 00:35:41.118 "traddr": "127.0.0.1", 00:35:41.118 "trsvcid": "4420" 00:35:41.118 }, 00:35:41.118 "method": "nvmf_subsystem_add_listener", 00:35:41.118 "req_id": 1 00:35:41.118 } 00:35:41.118 Got JSON-RPC error response 00:35:41.118 response: 00:35:41.118 { 00:35:41.118 "code": -32602, 00:35:41.118 "message": "Invalid parameters" 00:35:41.118 } 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:41.118 11:37:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=2016508 00:35:41.118 11:37:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2016508 /var/tmp/bperf.sock 00:35:41.118 11:37:14 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:41.118 11:37:14 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2016508 ']' 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:41.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.118 11:37:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:41.118 [2024-12-06 11:37:14.052375] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 00:35:41.118 [2024-12-06 11:37:14.052424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016508 ] 00:35:41.377 [2024-12-06 11:37:14.121764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.377 [2024-12-06 11:37:14.159386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.377 11:37:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.377 11:37:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:41.377 11:37:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:41.377 11:37:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:41.636 11:37:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CTsVOrU6he 00:35:41.636 11:37:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CTsVOrU6he 00:35:41.896 11:37:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:41.896 11:37:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.896 11:37:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.iY3ylMO1Zv == \/\t\m\p\/\t\m\p\.\i\Y\3\y\l\M\O\1\Z\v ]] 00:35:41.896 11:37:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:41.896 11:37:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.896 11:37:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.155 11:37:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.CTsVOrU6he == \/\t\m\p\/\t\m\p\.\C\T\s\V\O\r\U\6\h\e ]] 00:35:42.155 11:37:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:42.155 11:37:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.155 11:37:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.155 11:37:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.155 11:37:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.155 11:37:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
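The `get_key`/`get_refcnt` helpers traced above come from `keyring/common.sh` and pipe `rpc.py -s /var/tmp/bperf.sock keyring_get_keys` through `jq`. A stand-alone sketch of the same filtering, with a canned JSON payload standing in for the RPC output (illustrative values modeled on this run's key paths, not captured output; assumes `jq` is installed):

```shell
# Canned stand-in for `keyring_get_keys` RPC output.
keys='[
  {"name": "key0", "path": "/tmp/tmp.iY3ylMO1Zv", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.CTsVOrU6he", "refcnt": 1}
]'

# jq equivalent of keyring/common.sh: select one key object by name,
# then extract a single field from it.
get_key() { echo "$keys" | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

get_refcnt key0   # prints 1
```

The `(( 1 == 1 ))` lines in the trace are the test asserting that this extracted refcount matches the expected value.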
00:35:42.413 11:37:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:42.413 11:37:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.413 11:37:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:42.413 11:37:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.413 11:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.670 [2024-12-06 11:37:15.481644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:42.670 nvme0n1 00:35:42.670 11:37:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:42.670 11:37:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.670 11:37:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.670 11:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.670 11:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.670 11:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:42.928 11:37:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:42.928 11:37:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:42.928 11:37:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:42.928 11:37:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.928 11:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.928 11:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.928 11:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:43.186 11:37:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:43.186 11:37:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:43.186 Running I/O for 1 seconds... 00:35:44.183 20852.00 IOPS, 81.45 MiB/s 00:35:44.183 Latency(us) 00:35:44.183 [2024-12-06T10:37:17.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.183 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:44.183 nvme0n1 : 1.00 20895.06 81.62 0.00 0.00 6113.99 2740.60 12451.84 00:35:44.183 [2024-12-06T10:37:17.121Z] =================================================================================================================== 00:35:44.183 [2024-12-06T10:37:17.121Z] Total : 20895.06 81.62 0.00 0.00 6113.99 2740.60 12451.84 00:35:44.183 { 00:35:44.183 "results": [ 00:35:44.183 { 00:35:44.183 "job": "nvme0n1", 00:35:44.183 "core_mask": "0x2", 00:35:44.183 "workload": "randrw", 00:35:44.183 "percentage": 50, 00:35:44.183 "status": "finished", 00:35:44.183 "queue_depth": 128, 00:35:44.183 "io_size": 4096, 00:35:44.183 "runtime": 1.004113, 00:35:44.183 "iops": 20895.05862387998, 00:35:44.183 "mibps": 81.62132274953117, 
00:35:44.183 "io_failed": 0, 00:35:44.183 "io_timeout": 0, 00:35:44.183 "avg_latency_us": 6113.989418304873, 00:35:44.183 "min_latency_us": 2740.5963636363635, 00:35:44.183 "max_latency_us": 12451.84 00:35:44.183 } 00:35:44.183 ], 00:35:44.183 "core_count": 1 00:35:44.183 } 00:35:44.183 11:37:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:44.183 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:44.496 11:37:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:44.496 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.496 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.496 11:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.496 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.496 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.755 11:37:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:44.755 11:37:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.755 11:37:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:44.755 11:37:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.755 11:37:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:44.755 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:45.015 [2024-12-06 11:37:17.771905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:45.015 [2024-12-06 11:37:17.772561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1288330 (107): Transport endpoint is not connected 00:35:45.015 [2024-12-06 11:37:17.773557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1288330 (9): Bad file descriptor 00:35:45.015 [2024-12-06 11:37:17.774558] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:45.015 [2024-12-06 11:37:17.774568] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:45.015 [2024-12-06 11:37:17.774574] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:45.015 [2024-12-06 11:37:17.774583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:45.015 request: 00:35:45.015 { 00:35:45.015 "name": "nvme0", 00:35:45.015 "trtype": "tcp", 00:35:45.015 "traddr": "127.0.0.1", 00:35:45.015 "adrfam": "ipv4", 00:35:45.015 "trsvcid": "4420", 00:35:45.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.015 "prchk_reftag": false, 00:35:45.015 "prchk_guard": false, 00:35:45.015 "hdgst": false, 00:35:45.015 "ddgst": false, 00:35:45.015 "psk": "key1", 00:35:45.015 "allow_unrecognized_csi": false, 00:35:45.015 "method": "bdev_nvme_attach_controller", 00:35:45.015 "req_id": 1 00:35:45.015 } 00:35:45.015 Got JSON-RPC error response 00:35:45.015 response: 00:35:45.015 { 00:35:45.015 "code": -5, 00:35:45.015 "message": "Input/output error" 00:35:45.015 } 00:35:45.015 11:37:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:45.015 11:37:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:45.015 11:37:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:45.015 11:37:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:45.015 11:37:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:45.015 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.015 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.015 11:37:17 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:45.015 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.015 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.275 11:37:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:45.275 11:37:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:45.275 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:45.275 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.275 11:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.275 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.275 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.275 11:37:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:45.275 11:37:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:45.275 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:45.534 11:37:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:45.534 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:45.793 11:37:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:45.793 11:37:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:45.793 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.793 11:37:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:45.793 11:37:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.iY3ylMO1Zv 00:35:45.793 11:37:18 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.793 11:37:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:45.793 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:46.052 [2024-12-06 11:37:18.842366] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iY3ylMO1Zv': 0100660 00:35:46.052 [2024-12-06 11:37:18.842391] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:46.052 request: 00:35:46.052 { 00:35:46.052 "name": "key0", 00:35:46.052 "path": "/tmp/tmp.iY3ylMO1Zv", 00:35:46.052 "method": "keyring_file_add_key", 00:35:46.052 "req_id": 1 00:35:46.052 } 00:35:46.052 Got JSON-RPC error response 00:35:46.052 response: 00:35:46.052 { 00:35:46.052 "code": -1, 00:35:46.052 "message": "Operation not permitted" 00:35:46.052 } 00:35:46.052 11:37:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:46.052 11:37:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:46.052 11:37:18 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:46.052 11:37:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:46.052 11:37:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.iY3ylMO1Zv 00:35:46.052 11:37:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:46.052 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iY3ylMO1Zv 00:35:46.310 11:37:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.iY3ylMO1Zv 00:35:46.310 11:37:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:46.310 11:37:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.310 11:37:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.310 11:37:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.310 11:37:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.310 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.569 11:37:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:46.569 11:37:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:46.569 11:37:19 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.569 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.569 [2024-12-06 11:37:19.443941] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iY3ylMO1Zv': No such file or directory 00:35:46.569 [2024-12-06 11:37:19.443962] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:46.569 [2024-12-06 11:37:19.443977] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:46.569 [2024-12-06 11:37:19.443983] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:46.569 [2024-12-06 11:37:19.443990] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:46.569 [2024-12-06 11:37:19.443995] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:46.569 request: 00:35:46.569 { 00:35:46.569 "name": "nvme0", 00:35:46.569 "trtype": "tcp", 00:35:46.569 "traddr": "127.0.0.1", 00:35:46.569 "adrfam": "ipv4", 00:35:46.569 "trsvcid": "4420", 00:35:46.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.569 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:46.569 "prchk_reftag": false, 00:35:46.569 "prchk_guard": false, 00:35:46.569 "hdgst": false, 00:35:46.569 "ddgst": false, 00:35:46.569 "psk": "key0", 00:35:46.569 "allow_unrecognized_csi": false, 00:35:46.569 "method": "bdev_nvme_attach_controller", 00:35:46.569 "req_id": 1 00:35:46.569 } 00:35:46.569 Got JSON-RPC error response 00:35:46.569 response: 00:35:46.569 { 00:35:46.569 "code": -19, 00:35:46.569 "message": "No such device" 00:35:46.569 } 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:46.569 11:37:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:46.569 11:37:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:46.570 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:46.828 11:37:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:46.828 11:37:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:46.828 11:37:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:46.828 11:37:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pKhFJUW9O4 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.829 11:37:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.829 11:37:19 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:46.829 11:37:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.829 11:37:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:46.829 11:37:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:46.829 11:37:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pKhFJUW9O4 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pKhFJUW9O4 00:35:46.829 11:37:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.pKhFJUW9O4 00:35:46.829 11:37:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pKhFJUW9O4 00:35:46.829 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pKhFJUW9O4 00:35:47.087 11:37:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.087 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.346 nvme0n1 00:35:47.346 11:37:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:47.346 11:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.346 11:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.346 11:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.346 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.346 11:37:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.605 11:37:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:47.605 11:37:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:47.605 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:47.605 11:37:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:47.605 11:37:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:47.605 11:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.605 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.605 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.864 11:37:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:47.864 11:37:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:47.864 11:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.864 11:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.864 11:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.864 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.864 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.124 11:37:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:48.124 11:37:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:48.124 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:48.124 11:37:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:48.124 11:37:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:48.124 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.383 11:37:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:48.383 11:37:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pKhFJUW9O4 00:35:48.383 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pKhFJUW9O4 00:35:48.641 11:37:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CTsVOrU6he 00:35:48.641 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CTsVOrU6he 00:35:48.900 11:37:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.900 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.900 nvme0n1 00:35:48.900 11:37:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:48.900 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:49.158 11:37:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:49.158 "subsystems": [ 00:35:49.158 { 00:35:49.158 "subsystem": 
"keyring", 00:35:49.158 "config": [ 00:35:49.158 { 00:35:49.158 "method": "keyring_file_add_key", 00:35:49.158 "params": { 00:35:49.158 "name": "key0", 00:35:49.158 "path": "/tmp/tmp.pKhFJUW9O4" 00:35:49.158 } 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "method": "keyring_file_add_key", 00:35:49.158 "params": { 00:35:49.158 "name": "key1", 00:35:49.158 "path": "/tmp/tmp.CTsVOrU6he" 00:35:49.158 } 00:35:49.158 } 00:35:49.158 ] 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "subsystem": "iobuf", 00:35:49.158 "config": [ 00:35:49.158 { 00:35:49.158 "method": "iobuf_set_options", 00:35:49.158 "params": { 00:35:49.158 "small_pool_count": 8192, 00:35:49.158 "large_pool_count": 1024, 00:35:49.158 "small_bufsize": 8192, 00:35:49.158 "large_bufsize": 135168, 00:35:49.158 "enable_numa": false 00:35:49.158 } 00:35:49.158 } 00:35:49.158 ] 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "subsystem": "sock", 00:35:49.158 "config": [ 00:35:49.158 { 00:35:49.158 "method": "sock_set_default_impl", 00:35:49.158 "params": { 00:35:49.158 "impl_name": "posix" 00:35:49.158 } 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "method": "sock_impl_set_options", 00:35:49.158 "params": { 00:35:49.158 "impl_name": "ssl", 00:35:49.158 "recv_buf_size": 4096, 00:35:49.158 "send_buf_size": 4096, 00:35:49.158 "enable_recv_pipe": true, 00:35:49.158 "enable_quickack": false, 00:35:49.158 "enable_placement_id": 0, 00:35:49.158 "enable_zerocopy_send_server": true, 00:35:49.158 "enable_zerocopy_send_client": false, 00:35:49.158 "zerocopy_threshold": 0, 00:35:49.158 "tls_version": 0, 00:35:49.158 "enable_ktls": false 00:35:49.158 } 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "method": "sock_impl_set_options", 00:35:49.158 "params": { 00:35:49.158 "impl_name": "posix", 00:35:49.158 "recv_buf_size": 2097152, 00:35:49.158 "send_buf_size": 2097152, 00:35:49.158 "enable_recv_pipe": true, 00:35:49.158 "enable_quickack": false, 00:35:49.158 "enable_placement_id": 0, 00:35:49.158 "enable_zerocopy_send_server": true, 
00:35:49.158 "enable_zerocopy_send_client": false, 00:35:49.158 "zerocopy_threshold": 0, 00:35:49.158 "tls_version": 0, 00:35:49.158 "enable_ktls": false 00:35:49.158 } 00:35:49.158 } 00:35:49.158 ] 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "subsystem": "vmd", 00:35:49.158 "config": [] 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "subsystem": "accel", 00:35:49.158 "config": [ 00:35:49.158 { 00:35:49.158 "method": "accel_set_options", 00:35:49.158 "params": { 00:35:49.158 "small_cache_size": 128, 00:35:49.158 "large_cache_size": 16, 00:35:49.158 "task_count": 2048, 00:35:49.158 "sequence_count": 2048, 00:35:49.158 "buf_count": 2048 00:35:49.158 } 00:35:49.158 } 00:35:49.158 ] 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "subsystem": "bdev", 00:35:49.158 "config": [ 00:35:49.158 { 00:35:49.158 "method": "bdev_set_options", 00:35:49.158 "params": { 00:35:49.158 "bdev_io_pool_size": 65535, 00:35:49.158 "bdev_io_cache_size": 256, 00:35:49.158 "bdev_auto_examine": true, 00:35:49.158 "iobuf_small_cache_size": 128, 00:35:49.158 "iobuf_large_cache_size": 16 00:35:49.158 } 00:35:49.158 }, 00:35:49.158 { 00:35:49.158 "method": "bdev_raid_set_options", 00:35:49.158 "params": { 00:35:49.158 "process_window_size_kb": 1024, 00:35:49.159 "process_max_bandwidth_mb_sec": 0 00:35:49.159 } 00:35:49.159 }, 00:35:49.159 { 00:35:49.159 "method": "bdev_iscsi_set_options", 00:35:49.159 "params": { 00:35:49.159 "timeout_sec": 30 00:35:49.159 } 00:35:49.159 }, 00:35:49.159 { 00:35:49.159 "method": "bdev_nvme_set_options", 00:35:49.159 "params": { 00:35:49.159 "action_on_timeout": "none", 00:35:49.159 "timeout_us": 0, 00:35:49.159 "timeout_admin_us": 0, 00:35:49.159 "keep_alive_timeout_ms": 10000, 00:35:49.159 "arbitration_burst": 0, 00:35:49.159 "low_priority_weight": 0, 00:35:49.159 "medium_priority_weight": 0, 00:35:49.159 "high_priority_weight": 0, 00:35:49.159 "nvme_adminq_poll_period_us": 10000, 00:35:49.159 "nvme_ioq_poll_period_us": 0, 00:35:49.159 "io_queue_requests": 512, 
00:35:49.159 "delay_cmd_submit": true, 00:35:49.159 "transport_retry_count": 4, 00:35:49.159 "bdev_retry_count": 3, 00:35:49.159 "transport_ack_timeout": 0, 00:35:49.159 "ctrlr_loss_timeout_sec": 0, 00:35:49.159 "reconnect_delay_sec": 0, 00:35:49.159 "fast_io_fail_timeout_sec": 0, 00:35:49.159 "disable_auto_failback": false, 00:35:49.159 "generate_uuids": false, 00:35:49.159 "transport_tos": 0, 00:35:49.159 "nvme_error_stat": false, 00:35:49.159 "rdma_srq_size": 0, 00:35:49.159 "io_path_stat": false, 00:35:49.159 "allow_accel_sequence": false, 00:35:49.159 "rdma_max_cq_size": 0, 00:35:49.159 "rdma_cm_event_timeout_ms": 0, 00:35:49.159 "dhchap_digests": [ 00:35:49.159 "sha256", 00:35:49.159 "sha384", 00:35:49.159 "sha512" 00:35:49.159 ], 00:35:49.159 "dhchap_dhgroups": [ 00:35:49.159 "null", 00:35:49.159 "ffdhe2048", 00:35:49.159 "ffdhe3072", 00:35:49.159 "ffdhe4096", 00:35:49.159 "ffdhe6144", 00:35:49.159 "ffdhe8192" 00:35:49.159 ] 00:35:49.159 } 00:35:49.159 }, 00:35:49.159 { 00:35:49.159 "method": "bdev_nvme_attach_controller", 00:35:49.159 "params": { 00:35:49.159 "name": "nvme0", 00:35:49.159 "trtype": "TCP", 00:35:49.159 "adrfam": "IPv4", 00:35:49.159 "traddr": "127.0.0.1", 00:35:49.159 "trsvcid": "4420", 00:35:49.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.159 "prchk_reftag": false, 00:35:49.159 "prchk_guard": false, 00:35:49.159 "ctrlr_loss_timeout_sec": 0, 00:35:49.159 "reconnect_delay_sec": 0, 00:35:49.159 "fast_io_fail_timeout_sec": 0, 00:35:49.159 "psk": "key0", 00:35:49.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.159 "hdgst": false, 00:35:49.159 "ddgst": false, 00:35:49.159 "multipath": "multipath" 00:35:49.159 } 00:35:49.159 }, 00:35:49.159 { 00:35:49.159 "method": "bdev_nvme_set_hotplug", 00:35:49.159 "params": { 00:35:49.159 "period_us": 100000, 00:35:49.159 "enable": false 00:35:49.159 } 00:35:49.159 }, 00:35:49.159 { 00:35:49.159 "method": "bdev_wait_for_examine" 00:35:49.159 } 00:35:49.159 ] 00:35:49.159 }, 00:35:49.159 { 
00:35:49.159 "subsystem": "nbd", 00:35:49.159 "config": [] 00:35:49.159 } 00:35:49.159 ] 00:35:49.159 }' 00:35:49.159 11:37:22 keyring_file -- keyring/file.sh@115 -- # killprocess 2016508 00:35:49.159 11:37:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2016508 ']' 00:35:49.159 11:37:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2016508 00:35:49.159 11:37:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:49.159 11:37:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.159 11:37:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016508 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016508' 00:35:49.418 killing process with pid 2016508 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@973 -- # kill 2016508 00:35:49.418 Received shutdown signal, test time was about 1.000000 seconds 00:35:49.418 00:35:49.418 Latency(us) 00:35:49.418 [2024-12-06T10:37:22.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.418 [2024-12-06T10:37:22.356Z] =================================================================================================================== 00:35:49.418 [2024-12-06T10:37:22.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@978 -- # wait 2016508 00:35:49.418 11:37:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=2018111 00:35:49.418 11:37:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2018111 /var/tmp/bperf.sock 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2018111 ']' 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:49.418 11:37:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.418 11:37:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:49.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:49.418 11:37:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:49.418 "subsystems": [ 00:35:49.418 { 00:35:49.418 "subsystem": "keyring", 00:35:49.418 "config": [ 00:35:49.418 { 00:35:49.418 "method": "keyring_file_add_key", 00:35:49.418 "params": { 00:35:49.418 "name": "key0", 00:35:49.418 "path": "/tmp/tmp.pKhFJUW9O4" 00:35:49.418 } 00:35:49.418 }, 00:35:49.418 { 00:35:49.418 "method": "keyring_file_add_key", 00:35:49.418 "params": { 00:35:49.418 "name": "key1", 00:35:49.418 "path": "/tmp/tmp.CTsVOrU6he" 00:35:49.419 } 00:35:49.419 } 00:35:49.419 ] 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "subsystem": "iobuf", 00:35:49.419 "config": [ 00:35:49.419 { 00:35:49.419 "method": "iobuf_set_options", 00:35:49.419 "params": { 00:35:49.419 "small_pool_count": 8192, 00:35:49.419 "large_pool_count": 1024, 00:35:49.419 "small_bufsize": 8192, 00:35:49.419 "large_bufsize": 135168, 00:35:49.419 "enable_numa": false 00:35:49.419 } 00:35:49.419 } 00:35:49.419 ] 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "subsystem": "sock", 00:35:49.419 "config": [ 00:35:49.419 { 00:35:49.419 "method": "sock_set_default_impl", 00:35:49.419 "params": { 00:35:49.419 "impl_name": "posix" 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "sock_impl_set_options", 00:35:49.419 "params": { 00:35:49.419 "impl_name": "ssl", 00:35:49.419 "recv_buf_size": 4096, 00:35:49.419 
"send_buf_size": 4096, 00:35:49.419 "enable_recv_pipe": true, 00:35:49.419 "enable_quickack": false, 00:35:49.419 "enable_placement_id": 0, 00:35:49.419 "enable_zerocopy_send_server": true, 00:35:49.419 "enable_zerocopy_send_client": false, 00:35:49.419 "zerocopy_threshold": 0, 00:35:49.419 "tls_version": 0, 00:35:49.419 "enable_ktls": false 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "sock_impl_set_options", 00:35:49.419 "params": { 00:35:49.419 "impl_name": "posix", 00:35:49.419 "recv_buf_size": 2097152, 00:35:49.419 "send_buf_size": 2097152, 00:35:49.419 "enable_recv_pipe": true, 00:35:49.419 "enable_quickack": false, 00:35:49.419 "enable_placement_id": 0, 00:35:49.419 "enable_zerocopy_send_server": true, 00:35:49.419 "enable_zerocopy_send_client": false, 00:35:49.419 "zerocopy_threshold": 0, 00:35:49.419 "tls_version": 0, 00:35:49.419 "enable_ktls": false 00:35:49.419 } 00:35:49.419 } 00:35:49.419 ] 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "subsystem": "vmd", 00:35:49.419 "config": [] 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "subsystem": "accel", 00:35:49.419 "config": [ 00:35:49.419 { 00:35:49.419 "method": "accel_set_options", 00:35:49.419 "params": { 00:35:49.419 "small_cache_size": 128, 00:35:49.419 "large_cache_size": 16, 00:35:49.419 "task_count": 2048, 00:35:49.419 "sequence_count": 2048, 00:35:49.419 "buf_count": 2048 00:35:49.419 } 00:35:49.419 } 00:35:49.419 ] 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "subsystem": "bdev", 00:35:49.419 "config": [ 00:35:49.419 { 00:35:49.419 "method": "bdev_set_options", 00:35:49.419 "params": { 00:35:49.419 "bdev_io_pool_size": 65535, 00:35:49.419 "bdev_io_cache_size": 256, 00:35:49.419 "bdev_auto_examine": true, 00:35:49.419 "iobuf_small_cache_size": 128, 00:35:49.419 "iobuf_large_cache_size": 16 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "bdev_raid_set_options", 00:35:49.419 "params": { 00:35:49.419 "process_window_size_kb": 1024, 00:35:49.419 
"process_max_bandwidth_mb_sec": 0 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "bdev_iscsi_set_options", 00:35:49.419 "params": { 00:35:49.419 "timeout_sec": 30 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "bdev_nvme_set_options", 00:35:49.419 "params": { 00:35:49.419 "action_on_timeout": "none", 00:35:49.419 "timeout_us": 0, 00:35:49.419 "timeout_admin_us": 0, 00:35:49.419 "keep_alive_timeout_ms": 10000, 00:35:49.419 "arbitration_burst": 0, 00:35:49.419 "low_priority_weight": 0, 00:35:49.419 "medium_priority_weight": 0, 00:35:49.419 "high_priority_weight": 0, 00:35:49.419 "nvme_adminq_poll_period_us": 10000, 00:35:49.419 "nvme_ioq_poll_period_us": 0, 00:35:49.419 "io_queue_requests": 512, 00:35:49.419 "delay_cmd_submit": true, 00:35:49.419 "transport_retry_count": 4, 00:35:49.419 "bdev_retry_count": 3, 00:35:49.419 "transport_ack_timeout": 0, 00:35:49.419 "ctrlr_loss_timeout_sec": 0, 00:35:49.419 "reconnect_delay_sec": 0, 00:35:49.419 "fast_io_fail_timeout_sec": 0, 00:35:49.419 "disable_auto_failback": false, 00:35:49.419 "generate_uuids": false, 00:35:49.419 "transport_tos": 0, 00:35:49.419 "nvme_error_stat": false, 00:35:49.419 "rdma_srq_size": 0, 00:35:49.419 "io_path_stat": false, 00:35:49.419 "allow_accel_sequence": false, 00:35:49.419 "rdma_max_cq_size": 0, 00:35:49.419 "rdma_cm_event_timeout_ms": 0, 00:35:49.419 "dhchap_digests": [ 00:35:49.419 "sha256", 00:35:49.419 "sha384", 00:35:49.419 "sha512" 00:35:49.419 ], 00:35:49.419 "dhchap_dhgroups": [ 00:35:49.419 "null", 00:35:49.419 "ffdhe2048", 00:35:49.419 "ffdhe3072", 00:35:49.419 "ffdhe4096", 00:35:49.419 "ffdhe6144", 00:35:49.419 "ffdhe8192" 00:35:49.419 ] 00:35:49.419 } 00:35:49.419 }, 00:35:49.419 { 00:35:49.419 "method": "bdev_nvme_attach_controller", 00:35:49.419 "params": { 00:35:49.419 "name": "nvme0", 00:35:49.419 "trtype": "TCP", 00:35:49.419 "adrfam": "IPv4", 00:35:49.419 "traddr": "127.0.0.1", 00:35:49.419 "trsvcid": "4420", 00:35:49.419 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:49.419 "prchk_reftag": false, 00:35:49.419 "prchk_guard": false, 00:35:49.419 "ctrlr_loss_timeout_sec": 0, 00:35:49.420 "reconnect_delay_sec": 0, 00:35:49.420 "fast_io_fail_timeout_sec": 0, 00:35:49.420 "psk": "key0", 00:35:49.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.420 "hdgst": false, 00:35:49.420 "ddgst": false, 00:35:49.420 "multipath": "multipath" 00:35:49.420 } 00:35:49.420 }, 00:35:49.420 { 00:35:49.420 "method": "bdev_nvme_set_hotplug", 00:35:49.420 "params": { 00:35:49.420 "period_us": 100000, 00:35:49.420 "enable": false 00:35:49.420 } 00:35:49.420 }, 00:35:49.420 { 00:35:49.420 "method": "bdev_wait_for_examine" 00:35:49.420 } 00:35:49.420 ] 00:35:49.420 }, 00:35:49.420 { 00:35:49.420 "subsystem": "nbd", 00:35:49.420 "config": [] 00:35:49.420 } 00:35:49.420 ] 00:35:49.420 }' 00:35:49.420 11:37:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.420 11:37:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:49.420 [2024-12-06 11:37:22.326435] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:35:49.420 [2024-12-06 11:37:22.326480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018111 ] 00:35:49.679 [2024-12-06 11:37:22.398277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.679 [2024-12-06 11:37:22.434620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.679 [2024-12-06 11:37:22.593852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:50.243 11:37:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.243 11:37:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:50.243 11:37:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:50.243 11:37:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:50.243 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.501 11:37:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:50.501 11:37:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:50.501 11:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.501 11:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.501 11:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.501 11:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.501 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.759 11:37:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:50.759 11:37:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:50.759 11:37:23 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:50.759 11:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.759 11:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.759 11:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:50.759 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:51.017 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pKhFJUW9O4 /tmp/tmp.CTsVOrU6he 00:35:51.017 11:37:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2018111 00:35:51.017 11:37:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2018111 ']' 00:35:51.017 11:37:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2018111 00:35:51.017 11:37:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:51.017 11:37:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.017 11:37:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018111 00:35:51.275 11:37:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:51.275 11:37:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:51.275 11:37:23 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2018111' 00:35:51.275 killing process with pid 2018111 00:35:51.275 11:37:23 keyring_file -- common/autotest_common.sh@973 -- # kill 2018111 00:35:51.275 Received shutdown signal, test time was about 1.000000 seconds 00:35:51.275 00:35:51.275 Latency(us) 00:35:51.275 [2024-12-06T10:37:24.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.275 [2024-12-06T10:37:24.213Z] =================================================================================================================== 00:35:51.275 [2024-12-06T10:37:24.213Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:51.275 11:37:23 keyring_file -- common/autotest_common.sh@978 -- # wait 2018111 00:35:51.275 11:37:24 keyring_file -- keyring/file.sh@21 -- # killprocess 2016445 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2016445 ']' 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2016445 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016445 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016445' 00:35:51.275 killing process with pid 2016445 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@973 -- # kill 2016445 00:35:51.275 11:37:24 keyring_file -- common/autotest_common.sh@978 -- # wait 2016445 00:35:51.844 00:35:51.844 real 0m11.707s 00:35:51.844 user 0m28.265s 00:35:51.844 sys 0m2.650s 00:35:51.844 11:37:24 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:51.844 11:37:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:51.844 ************************************ 00:35:51.844 END TEST keyring_file 00:35:51.844 ************************************ 00:35:51.844 11:37:24 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:51.844 11:37:24 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:51.844 11:37:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:51.844 11:37:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:51.844 11:37:24 -- common/autotest_common.sh@10 -- # set +x 00:35:51.844 ************************************ 00:35:51.844 START TEST keyring_linux 00:35:51.844 ************************************ 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:51.844 Joined session keyring: 15244449 00:35:51.844 * Looking for test storage... 
00:35:51.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.844 --rc genhtml_branch_coverage=1 00:35:51.844 --rc genhtml_function_coverage=1 00:35:51.844 --rc genhtml_legend=1 00:35:51.844 --rc geninfo_all_blocks=1 00:35:51.844 --rc geninfo_unexecuted_blocks=1 00:35:51.844 00:35:51.844 ' 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.844 --rc genhtml_branch_coverage=1 00:35:51.844 --rc genhtml_function_coverage=1 00:35:51.844 --rc genhtml_legend=1 00:35:51.844 --rc geninfo_all_blocks=1 00:35:51.844 --rc geninfo_unexecuted_blocks=1 00:35:51.844 00:35:51.844 ' 
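The `cmp_versions` trace above (from `scripts/common.sh`, deciding whether the installed lcov 1.15 is older than 2) splits each version on `.`, `-`, `:` and compares components numerically, padding the shorter list with zeros. A sketch of the same logic as a standalone function:

```shell
# lt VER1 VER2 -> exit 0 when VER1 sorts strictly before VER2,
# mirroring the cmp_versions '<' path traced above.
lt() {
    local IFS=.-:            # split on the same separators as scripts/common.sh
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                 # equal versions are not '<'
}

lt "1.15" "2" && echo "lcov 1.15 is older than 2"   # prints: lcov 1.15 is older than 2
```

Zero-padding is what makes `lt 2.0 2` correctly fail: `2.0` and `2` compare equal component-wise.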
00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.844 --rc genhtml_branch_coverage=1 00:35:51.844 --rc genhtml_function_coverage=1 00:35:51.844 --rc genhtml_legend=1 00:35:51.844 --rc geninfo_all_blocks=1 00:35:51.844 --rc geninfo_unexecuted_blocks=1 00:35:51.844 00:35:51.844 ' 00:35:51.844 11:37:24 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.844 --rc genhtml_branch_coverage=1 00:35:51.844 --rc genhtml_function_coverage=1 00:35:51.844 --rc genhtml_legend=1 00:35:51.844 --rc geninfo_all_blocks=1 00:35:51.844 --rc geninfo_unexecuted_blocks=1 00:35:51.844 00:35:51.844 ' 00:35:51.844 11:37:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:51.844 11:37:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
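`nvme gen-hostnqn` above generates the UUID-based host NQN that the next lines export as `NVME_HOSTNQN`. A sketch of the same format (Linux-only, using the kernel's random UUID source as a stand-in for the nvme-cli command; the concrete UUID differs per run):

```shell
# Equivalent shape to "nvme gen-hostnqn" output: an NQN of the form
# nqn.2014-08.org.nvmexpress:uuid:<uuid>, as seen in NVME_HOSTNQN above.
uuid=$(cat /proc/sys/kernel/random/uuid)   # Linux-only UUID source (assumption)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```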
00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.844 11:37:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.844 11:37:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.845 11:37:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.845 11:37:24 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.845 11:37:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.845 11:37:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:51.845 11:37:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:51.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:51.845 11:37:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:51.845 11:37:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:51.845 11:37:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:52.104 /tmp/:spdk-test:key0 00:35:52.104 11:37:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:52.104 11:37:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:52.104 11:37:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:52.104 /tmp/:spdk-test:key1 00:35:52.104 11:37:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2018585 00:35:52.104 11:37:24 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2018585 00:35:52.104 11:37:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2018585 ']' 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.104 11:37:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:52.104 [2024-12-06 11:37:24.882934] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:35:52.104 [2024-12-06 11:37:24.882981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018585 ] 00:35:52.104 [2024-12-06 11:37:24.955977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.104 [2024-12-06 11:37:24.993224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:53.041 [2024-12-06 11:37:25.690806] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.041 null0 00:35:53.041 [2024-12-06 11:37:25.722856] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:53.041 [2024-12-06 11:37:25.723234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:53.041 202436492 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:53.041 319592871 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2018845 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2018845 /var/tmp/bperf.sock 00:35:53.041 11:37:25 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2018845 ']' 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:53.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:53.041 [2024-12-06 11:37:25.794035] Starting SPDK v25.01-pre git sha1 50b04b06b / DPDK 24.03.0 initialization... 
00:35:53.041 [2024-12-06 11:37:25.794082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018845 ] 00:35:53.041 [2024-12-06 11:37:25.867243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.041 [2024-12-06 11:37:25.904551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.041 11:37:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:53.041 11:37:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:53.041 11:37:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:53.300 11:37:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:53.300 11:37:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:53.560 11:37:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:53.560 11:37:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:53.819 [2024-12-06 11:37:26.528771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:53.819 nvme0n1 00:35:53.819 11:37:26 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:53.819 11:37:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:53.819 11:37:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:53.819 11:37:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:53.819 11:37:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:53.819 11:37:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:54.077 11:37:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.077 11:37:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:54.077 11:37:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@25 -- # sn=202436492 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 202436492 == \2\0\2\4\3\6\4\9\2 ]] 00:35:54.077 11:37:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 202436492 00:35:54.077 11:37:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:54.077 11:37:27 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:54.336 Running I/O for 1 seconds... 00:35:55.271 23492.00 IOPS, 91.77 MiB/s 00:35:55.271 Latency(us) 00:35:55.271 [2024-12-06T10:37:28.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.271 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:55.271 nvme0n1 : 1.01 23492.88 91.77 0.00 0.00 5432.13 4647.10 9770.82 00:35:55.271 [2024-12-06T10:37:28.209Z] =================================================================================================================== 00:35:55.271 [2024-12-06T10:37:28.209Z] Total : 23492.88 91.77 0.00 0.00 5432.13 4647.10 9770.82 00:35:55.271 { 00:35:55.271 "results": [ 00:35:55.271 { 00:35:55.271 "job": "nvme0n1", 00:35:55.271 "core_mask": "0x2", 00:35:55.271 "workload": "randread", 00:35:55.271 "status": "finished", 00:35:55.271 "queue_depth": 128, 00:35:55.271 "io_size": 4096, 00:35:55.271 "runtime": 1.005411, 00:35:55.271 "iops": 23492.880026178347, 00:35:55.271 "mibps": 91.76906260225917, 00:35:55.271 "io_failed": 0, 00:35:55.271 "io_timeout": 0, 00:35:55.271 "avg_latency_us": 5432.131532291586, 00:35:55.271 "min_latency_us": 4647.098181818182, 00:35:55.271 "max_latency_us": 9770.821818181817 00:35:55.271 } 00:35:55.271 ], 00:35:55.271 "core_count": 1 00:35:55.271 } 00:35:55.271 11:37:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:55.271 11:37:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:55.531 11:37:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:55.531 11:37:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:55.531 11:37:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:55.531 11:37:28 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:55.531 11:37:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:55.531 11:37:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.790 11:37:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:55.790 11:37:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:55.790 11:37:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:55.790 11:37:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:55.790 11:37:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.790 11:37:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.790 [2024-12-06 11:37:28.641184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:55.790 [2024-12-06 11:37:28.641575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181c0c0 (107): Transport endpoint is not connected 00:35:55.790 [2024-12-06 11:37:28.642570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181c0c0 (9): Bad file descriptor 00:35:55.790 [2024-12-06 11:37:28.643571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:55.790 [2024-12-06 11:37:28.643581] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:55.790 [2024-12-06 11:37:28.643588] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:55.790 [2024-12-06 11:37:28.643596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:55.790 request: 00:35:55.790 { 00:35:55.790 "name": "nvme0", 00:35:55.790 "trtype": "tcp", 00:35:55.790 "traddr": "127.0.0.1", 00:35:55.790 "adrfam": "ipv4", 00:35:55.790 "trsvcid": "4420", 00:35:55.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.790 "prchk_reftag": false, 00:35:55.790 "prchk_guard": false, 00:35:55.790 "hdgst": false, 00:35:55.790 "ddgst": false, 00:35:55.790 "psk": ":spdk-test:key1", 00:35:55.790 "allow_unrecognized_csi": false, 00:35:55.790 "method": "bdev_nvme_attach_controller", 00:35:55.790 "req_id": 1 00:35:55.790 } 00:35:55.790 Got JSON-RPC error response 00:35:55.790 response: 00:35:55.790 { 00:35:55.791 "code": -5, 00:35:55.791 "message": "Input/output error" 00:35:55.791 } 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@33 -- # sn=202436492 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 202436492 00:35:55.791 1 links removed 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:55.791 
11:37:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@33 -- # sn=319592871 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 319592871 00:35:55.791 1 links removed 00:35:55.791 11:37:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2018845 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2018845 ']' 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2018845 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.791 11:37:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018845 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018845' 00:35:56.050 killing process with pid 2018845 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 2018845 00:35:56.050 Received shutdown signal, test time was about 1.000000 seconds 00:35:56.050 00:35:56.050 Latency(us) 00:35:56.050 [2024-12-06T10:37:28.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.050 [2024-12-06T10:37:28.988Z] =================================================================================================================== 00:35:56.050 [2024-12-06T10:37:28.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 2018845 
00:35:56.050 11:37:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2018585 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2018585 ']' 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2018585 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018585 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018585' 00:35:56.050 killing process with pid 2018585 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 2018585 00:35:56.050 11:37:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 2018585 00:35:56.309 00:35:56.309 real 0m4.696s 00:35:56.309 user 0m8.424s 00:35:56.309 sys 0m1.461s 00:35:56.309 11:37:29 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.309 11:37:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.309 ************************************ 00:35:56.309 END TEST keyring_linux 00:35:56.309 ************************************ 00:35:56.568 11:37:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:56.568 11:37:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:56.568 11:37:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:56.568 11:37:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:56.568 11:37:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:56.568 11:37:29 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:56.568 11:37:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:56.568 11:37:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:56.568 11:37:29 -- common/autotest_common.sh@10 -- # set +x 00:35:56.568 11:37:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:56.568 11:37:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:56.568 11:37:29 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:56.568 11:37:29 -- common/autotest_common.sh@10 -- # set +x 00:36:01.838 INFO: APP EXITING 00:36:01.838 INFO: killing all VMs 00:36:01.838 INFO: killing vhost app 00:36:01.838 INFO: EXIT DONE 00:36:05.128 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:36:05.128 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:05.128 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:05.128 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:08.421 Cleaning 00:36:08.421 Removing: /var/run/dpdk/spdk0/config 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:08.421 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:08.421 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:08.421 Removing: /var/run/dpdk/spdk1/config 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:08.421 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:08.421 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:08.421 Removing: /var/run/dpdk/spdk2/config 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:08.421 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:08.421 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:08.421 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:08.421 Removing: /var/run/dpdk/spdk3/config 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:08.421 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:08.421 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:08.421 Removing: /var/run/dpdk/spdk4/config 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:08.421 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:08.421 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:08.421 Removing: /dev/shm/bdev_svc_trace.1 00:36:08.421 Removing: /dev/shm/nvmf_trace.0 00:36:08.421 Removing: /dev/shm/spdk_tgt_trace.pid1508266 00:36:08.421 Removing: /var/run/dpdk/spdk0 00:36:08.421 Removing: /var/run/dpdk/spdk1 00:36:08.421 Removing: /var/run/dpdk/spdk2 00:36:08.421 Removing: /var/run/dpdk/spdk3 00:36:08.421 Removing: /var/run/dpdk/spdk4 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1505831 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1506942 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1508266 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1508898 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1509802 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1510076 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1511181 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1511446 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1511721 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1513352 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1514724 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1515041 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1515371 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1515714 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1516047 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1516331 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1516609 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1516954 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1517895 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1521430 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1521955 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1522262 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1522276 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1522828 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1522833 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1523392 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1523395 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1523687 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1523805 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1523994 00:36:08.421 Removing: 
/var/run/dpdk/spdk_pid1524258 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1524716 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1524923 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1525253 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1529422 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1533984 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1544797 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1545450 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1550032 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1550393 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1554932 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1561320 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1564287 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1575591 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1585188 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1587025 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1588084 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1606178 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1610507 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1660172 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1665874 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1672171 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1679646 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1679649 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1680535 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1681488 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1682523 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1683126 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1683299 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1683575 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1683596 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1683598 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1684639 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1685458 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1686489 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1687271 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1687277 00:36:08.421 Removing: /var/run/dpdk/spdk_pid1687540 
00:36:08.422 Removing: /var/run/dpdk/spdk_pid1688947 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1690019 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1698742 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1728349 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1733185 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1734942 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1736854 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1737022 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1737138 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1737417 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1737998 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1740076 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1740924 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1741264 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1743766 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1744451 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1745030 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1749477 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1755713 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1755714 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1755715 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1759722 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1768431 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1772819 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1779288 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1780757 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1782255 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1783752 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1788805 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1793477 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1797621 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1806104 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1806106 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1810942 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1811194 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1811461 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1811974 00:36:08.422 Removing: 
/var/run/dpdk/spdk_pid1811981 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1816546 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1817197 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1821816 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1824576 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1830165 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1835856 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1845049 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1852870 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1852926 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1872870 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1873444 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1873982 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1874523 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1875311 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1875899 00:36:08.422 Removing: /var/run/dpdk/spdk_pid1876450 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1876988 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1881545 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1881806 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1888197 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1888374 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1894196 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1898608 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1909580 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1910117 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1914674 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1914959 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1919260 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1925231 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1928070 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1938719 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1947866 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1949705 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1950870 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1968257 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1972302 00:36:08.681 Removing: /var/run/dpdk/spdk_pid1975277 
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1983643
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1983715
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1989010
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1991219
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1993260
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1994444
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1996647
00:36:08.681 Removing: /var/run/dpdk/spdk_pid1998051
00:36:08.681 Removing: /var/run/dpdk/spdk_pid2007810
00:36:08.681 Removing: /var/run/dpdk/spdk_pid2008332
00:36:08.681 Removing: /var/run/dpdk/spdk_pid2008855
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2011313
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2011842
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2012371
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2016445
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2016508
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2018111
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2018585
00:36:08.682 Removing: /var/run/dpdk/spdk_pid2018845
00:36:08.682 Clean
00:36:08.682 11:37:41 -- common/autotest_common.sh@1453 -- # return 0
00:36:08.682 11:37:41 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:08.682 11:37:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:08.682 11:37:41 -- common/autotest_common.sh@10 -- # set +x
00:36:08.941 11:37:41 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:08.941 11:37:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:08.941 11:37:41 -- common/autotest_common.sh@10 -- # set +x
00:36:08.941 11:37:41 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:08.941 11:37:41 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:08.941 11:37:41 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:08.941 11:37:41 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:08.941 11:37:41 -- spdk/autotest.sh@398 -- # hostname
00:36:08.941 11:37:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:08.941 geninfo: WARNING: invalid characters removed from testname!
00:36:30.881 11:38:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:30.881 11:38:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:32.785 11:38:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:34.162 11:38:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:36.067 11:38:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:37.973 11:38:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:39.353 11:38:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:39.353 11:38:12 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:39.353 11:38:12 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:39.353 11:38:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:39.353 11:38:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:39.353 11:38:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:39.353 + [[ -n 1427166 ]]
00:36:39.353 + sudo kill 1427166
00:36:39.362 [Pipeline] }
00:36:39.379 [Pipeline] // stage
00:36:39.385 [Pipeline] }
00:36:39.399 [Pipeline] // timeout
00:36:39.404 [Pipeline] }
00:36:39.419 [Pipeline] // catchError
00:36:39.424 [Pipeline] }
00:36:39.439 [Pipeline] // wrap
00:36:39.445 [Pipeline] }
00:36:39.458 [Pipeline] // catchError
00:36:39.468 [Pipeline] stage
00:36:39.470 [Pipeline] { (Epilogue)
00:36:39.484 [Pipeline] catchError
00:36:39.485 [Pipeline] {
00:36:39.499 [Pipeline] echo
00:36:39.501 Cleanup processes
00:36:39.507 [Pipeline] sh
00:36:39.792 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:39.792 2029913 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:39.807 [Pipeline] sh
00:36:40.092 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:40.092 ++ grep -v 'sudo pgrep'
00:36:40.092 ++ awk '{print $1}'
00:36:40.092 + sudo kill -9
00:36:40.092 + true
00:36:40.105 [Pipeline] sh
00:36:40.389 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:50.407 [Pipeline] sh
00:36:50.747 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:50.747 Artifacts sizes are good
00:36:50.763 [Pipeline] archiveArtifacts
00:36:50.770 Archiving artifacts
00:36:50.892 [Pipeline] sh
00:36:51.172 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:51.190 [Pipeline] cleanWs
00:36:51.203 [WS-CLEANUP] Deleting project workspace...
00:36:51.204 [WS-CLEANUP] Deferred wipeout is used...
00:36:51.210 [WS-CLEANUP] done
00:36:51.212 [Pipeline] }
00:36:51.235 [Pipeline] // catchError
00:36:51.251 [Pipeline] sh
00:36:51.534 + logger -p user.info -t JENKINS-CI
00:36:51.543 [Pipeline] }
00:36:51.560 [Pipeline] // stage
00:36:51.567 [Pipeline] }
00:36:51.583 [Pipeline] // node
00:36:51.590 [Pipeline] End of Pipeline
00:36:51.626 Finished: SUCCESS